Leveraged ETF Discussion Thread (56)

1001 replies
2 Like 2 Dislike
2023-02-08 17:41:42
How do you open a VPN on a company computer?
2023-02-08 17:43:25
Does your account have enough money?
Did you also opt into the complex leveraged products?
2023-02-08 17:45:29
2023-02-08 17:55:32
Send whatever you need to do to your own phone, then ask ChatGPT from your phone.
2023-02-08 18:32:16
A little more to add:
CPI measures the prices consumers pay, but ordinary consumers don't buy crude oil directly,
so there's a lag between a rise in crude prices and a rise in the prices consumers pay.

The real world is far more complicated than the textbook.
2023-02-08 18:33:50
I'm already using ChatGPT to help me write some code.
2023-02-08 18:47:28
2023-02-08 19:02:15
2023-02-08 19:09:36
2023-02-08 19:12:45
2023-02-08 19:41:33
2023-02-08 21:11:04
It's too complicated; you can't forecast it accurately just by reading more news.
I don't mind sharing what I've observed, but the uncertainty is very high, so don't treat it as prophecy.
Also, the inflation data itself is one thing; how the market interprets it is another.

- investing.com's consensus is Core CPI 5.5% (MoM 0.4%) and headline CPI 6.2% (MoM 0.5%); the pace of disinflation is slower than in the previous two months
- Inflation Nowcasting estimates 5.58% and 6.44%, but its model only looks at gasoline prices, so it's almost certainly an overestimate
- Non-farm payrolls (adjusted) came in far above expectations
- Inflation in both Australia and Spain beat expectations
- Crude oil, gasoline, silver, and copper prices are rising

So it looks like inflation will fall, but by less than in the previous three months, giving the Fed more justification to keep hiking.

- But Powell said only yesterday, looking relaxed and confident, that the "disinflationary process in the U.S. economy has begun"; it depends how much faith you put in him.
- The CPI methodology was changed in two ways (https://lih.kg/yxedePX). The first change hits exactly the stubbornly high OER, which carries a 24% weight; that change would have lowered 2013-2016 CPI by 0.1%. With 2022-23 being so unusual, figuring out how much it lowers CPI now would honestly take a few PhDs. According to 投資Talk君's Jan 29 video, the second change would actually push inflation up, though I'm not sure how accurate that is.
- 投資Talk君's video today also analyzes inflation. One line of his goes: "Core inflation has been trending down over the past few months, so the most reasonable assumption is that it keeps falling." That has some merit; see whether you agree.
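As an aside on how these headline figures fit together, here is a minimal sketch using hypothetical CPI index levels (not actual BLS data): the YoY rate is just the ratio of the index to its level twelve months earlier, which is why a modest MoM print can still leave the YoY rate elevated, and why disinflation slows when fast-rising months drop out of the 12-month window.

```python
# Hypothetical CPI index levels for illustration only; not real BLS data.
def yoy_inflation(index_now, index_year_ago):
    """Year-over-year inflation rate, in percent."""
    return (index_now / index_year_ago - 1) * 100

def mom_inflation(index_now, index_last_month):
    """Month-over-month inflation rate, in percent."""
    return (index_now / index_last_month - 1) * 100

now, last_month, year_ago = 298.0, 296.5, 280.6
print(round(mom_inflation(now, last_month), 1))  # ~0.5
print(round(yoy_inflation(now, year_ago), 1))    # ~6.2
```

The index values are chosen only so the two outputs line up with the consensus figures quoted above.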
2023-02-08 21:47:42
And M2 is falling for the first time in decades.
2023-02-08 21:51:38
cls
2023-02-08 21:53:52
That's a very strong argument for a recession.
2023-02-08 22:11:48

Someone overseas suggested Starbucks launch a paid queue-jumping service.
2023-02-08 22:14:27
Google presented its newly launched AI, but the AI got a question wrong, and the stock immediately plunged in pre-market trading.
2023-02-08 22:26:39
What goes around comes around.
Last time it was Microsoft whose machine hung at a launch event and they used Edge on stage to download Chrome; now it's Google's turn to embarrass itself.
https://youtu.be/K_Hka8208Y0
2023-02-09 02:34:15
2023-02-09 02:36:09
2023-02-09 08:14:37
2023-02-09 11:21:45
I'm playing with Quora's app; it's much faster than the web version of ChatGPT (https://apps.apple.com/hk/app/poe-fast-helpful-ai-chat/id1640745955)
After it answers you, it suggests follow-up questions.
For example, I asked it about sights in Paris, and it suggested I also ask "What are the Eiffel Tower's opening hours?" and "Tell me about the history of the Louvre."

These suggested follow-up questions have advertising potential: they could steer users toward asking about accommodation, restaurants, and so on.

Thinking conspiratorially, suggested follow-up questions could also steer the direction of users' thinking.
They could nudge you toward thinking about something, or away from it.
2023-02-09 11:37:48
Just asked GPT.
The input question limit is 2048 tokens.

In the context of NLP (Natural Language Processing), a token is a sequence of characters that represents a single semantic unit in the text. Tokens are typically created by splitting the text into individual words or subwords, which are then used as the basic units of processing. For example, in the sentence "I like to play soccer," the individual words "I," "like," "to," "play," and "soccer" would each be considered a separate token.


I also asked it to write a Python script to count tokens:
def count_tokens(text):
    # Naive count: split on whitespace, one token per word.
    tokens = text.split()
    return len(tokens)

text = "This is an example sentence."

print("Number of tokens:", count_tokens(text))
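Worth noting that the whitespace split above is only a rough proxy: GPT-style models count subword tokens (byte-pair encoding), so the real count is usually higher than the word count. A common rule of thumb is roughly four characters of English text per token. A minimal sketch comparing the two estimates (the 4-chars-per-token constant is a heuristic assumption, not an exact figure):

```python
def count_tokens_whitespace(text):
    # Naive count: one token per whitespace-separated word.
    return len(text.split())

def estimate_tokens_chars(text):
    # Heuristic estimate: ~4 characters of English text per token.
    return max(1, round(len(text) / 4))

sample = "I like to play soccer."
print(count_tokens_whitespace(sample))  # 5
print(estimate_tokens_chars(sample))    # 6 (22 chars / 4, rounded)
```

For an exact count against a specific model you'd need that model's own tokenizer, since different models split words differently.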