IT Discussion (80): Working at Lane Crawford is such a blessing, they only get a 50% pay cut; IT dogs have already been fired, a 100% pay cut

1001 replies
8 Like 0 Dislike
2020-04-17 17:54:25
2020-04-17 17:59:05
2020-04-17 18:00:31

The most ridiculous part is when you've already finished writing it, and then the boss says if it can't be done then don't bother doing it.
2020-04-17 18:04:49
Learned something new.
Though it seems avoiding cache pollution only has a clear benefit in very specific scenarios.
On the other hand, I also came across prefetching along the way, which looks more broadly useful.
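As a rough illustration of that prefetch point, here is a minimal sketch using the GCC/Clang __builtin_prefetch hint; sum_with_prefetch and PREFETCH_AHEAD are made-up names, and the prefetch distance would need tuning per workload:

```c
/* Minimal sketch of software prefetching with the GCC/Clang builtin.
 * PREFETCH_AHEAD is an assumed tuning constant, not from the thread. */
#include <stddef.h>

#define PREFETCH_AHEAD 16   /* how many elements ahead to prefetch; tune per workload */

long sum_with_prefetch(const long *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        /* Hint: pull a future element toward the cache while we work on
         * the current one (0 = read access, 1 = low temporal locality).
         * Prefetching past the end of the array is harmless: it is only
         * a hint and never faults. */
        __builtin_prefetch(&a[i + PREFETCH_AHEAD], 0, 1);
        s += a[i];
    }
    return s;
}
```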
2020-04-17 18:07:11
IT dogs are really petty. As a disadvantaged group, IT dogs should be all the more united and of one mind.
2020-04-17 18:07:40
Sometimes logistic regression already gets 90% accuracy.
Training a NN is a hassle,
and you can't update the model in real time either.
2020-04-17 18:11:42
WebAssembly is exactly what's being built to fix JS.
2020-04-17 18:11:58
In ten years every language will compile directly to wasm; the era of me writing frontends in Python is coming.

btw, if that happens, C# comes out on top
2020-04-17 18:15:38
I'm not optimistic about WebAssembly. Off-the-shelf frontend plugins will have plenty of problems, and people are lazy; teams will always prefer to share one development language internally.
2020-04-17 18:21:05
Yeah, but for now JS can still fight on for another ten years.
2020-04-17 18:21:17
Let me revise my point.
I should say: there's no direct access to the DOM. They might add a layer for accessing the DOM, but either way it will still be slow,
because there are too many places where things can go wrong, especially JavaScript's pile of macro/micro tasks. For browser/framework vendors, the most convenient and safest approach is to align DOM access with the task queue, i.e. build a wrapper for wasm.

WebVR is another topic, and one I happen to be able to ramble about a bit.

The computation is certainly heavy, but I don't think the trend will be wasm; it will be passing the matrices and shaders to an API.
Besides, the hardest part is actually input:
for things like motion sensors and cameras, what security policy should govern a webpage's access to that data, and when a user carelessly grants permission to a malicious webpage, how do you minimise the damage?
2020-04-17 18:22:00
2020-04-17 18:24:13
Not necessarily.
If you haven't explicitly told it to compile with SSE, then it definitely won't.
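As a hedged aside on that point, one way to see whether SSE code generation is actually enabled for a build is to check the compiler's predefined macros (e.g. when building with gcc -O2 -msse4.2); this little check is only an illustration, and note that SSE2 is on by default for x86-64 targets:

```c
/* Tiny compile-time check of which SSE level the build actually enables.
 * gcc/clang predefine __SSE2__, __SSE4_2__, etc. only when the target
 * flags (e.g. -msse4.2, -march=native) allow those instructions. */
#include <stdio.h>

int main(void)
{
#if defined(__SSE4_2__)
    puts("compiled with SSE4.2 enabled");
#elif defined(__SSE2__)
    puts("compiled with SSE2 enabled (default for x86-64)");
#else
    puts("compiled without SSE");
#endif
    return 0;
}
```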
2020-04-17 18:25:04
No code=no bug
2020-04-17 18:27:53
One reason why we want to bypass the caches is that we know the data we are processing is not going to be reused soon. For example, we need to copy hundreds of MBs from one array to another. The size of the data is much bigger than the caches: typically L1 is ~64K, L2 is a few hundred KBs and L3 is a few MBs. If the copying goes through the caches, then by the end of the copy the last parts of the arrays will have flooded all levels of cache and evicted other data that may have better locality than the arrays. In other words, our streaming data pollutes the caches.

To avoid this, many modern CPUs support non-temporal loads and stores. Note that the CPU may still use a very small buffer to support non-temporal memory operations; the buffer looks like a very tiny dedicated cache, so you are not really writing to RAM straight from the register. This is done so that the CPU can combine writes to fully utilize the memory bus. Non-temporal memory reads work the same way.
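For a rough idea of what this looks like in code, here is a minimal sketch of a streaming copy built on SSE2 non-temporal stores (_mm_stream_si128); it is not from the post, it assumes x86-64 with 16-byte-aligned buffers and a size that is a multiple of 16, and a real memcpy would still handle the unaligned head and tail:

```c
/* Sketch of a streaming copy that bypasses the caches using SSE2
 * non-temporal stores. Assumes dst/src are 16-byte aligned and n is a
 * multiple of 16; a real memcpy would handle the unaligned head/tail. */
#include <emmintrin.h>
#include <stddef.h>

void stream_copy(void *dst, const void *src, size_t n)
{
    __m128i       *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)src;

    for (size_t i = 0; i < n / 16; i++) {
        /* Cached load from the source, non-temporal store to the
         * destination: the stores go through the small write-combining
         * buffers instead of allocating lines in L1/L2/L3. */
        _mm_stream_si128(&d[i], _mm_load_si128(&s[i]));
    }
    _mm_sfence();   /* make the streaming stores globally visible */
}
```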

If you really don't want any buffering, you can change cacheability attributes in your memory mapping (i.e. page table entries). This is usually done by the OS for device drivers.
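As a loose sketch of what that OS-side step can look like, here is a Linux-kernel-style fragment where a hypothetical driver maps one region uncached with ioremap() and another as write-combining with ioremap_wc(); the device addresses and names are made up, not anything from the thread:

```c
/* Linux-kernel-style sketch: a hypothetical driver choosing the
 * cacheability of its mappings. The physical addresses and sizes below
 * are made up for illustration. */
#include <linux/io.h>
#include <linux/errno.h>

#define DEV_PHYS_BASE 0xfd000000UL   /* hypothetical device address */
#define DEV_REGION_SZ 0x10000UL

static void __iomem *regs;       /* uncached: every access hits the device  */
static void __iomem *framebuf;   /* write-combining: stores may be buffered */

static int map_device_regions(void)
{
    regs = ioremap(DEV_PHYS_BASE, DEV_REGION_SZ);
    framebuf = ioremap_wc(DEV_PHYS_BASE + DEV_REGION_SZ, DEV_REGION_SZ);

    if (!regs || !framebuf)
        return -ENOMEM;
    return 0;
}
```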
2020-04-17 18:33:40
Blazor seems to be a .NET runtime on top of wasm, i.e. it doesn't compile down to wasm.
2020-04-17 18:39:38
Have you ever implemented your own memcpy() at work?

There are other tricks you can play with caches. For example some CPUs have instructions that create cache lines out of thin air.

https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/assembler/idalangref_dcbz_instrs.html

The IBM architecture is nice in that it specifies the cache line contents to be all zeros. I have used a similar instruction on another CPU that creates a cache line with garbage (the previous data of the cache line). Garbage is fine for the destination of a copy.

The reason these instructions exist is to avoid reading in a cache line that is about to be completely overwritten anyway. This is the case for a write-back cache.
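As a rough, hypothetical sketch (not the poster's actual code), the dcbz trick can look like the following on PowerPC; CACHE_LINE is assumed to be 128 bytes, the destination must be cache-line aligned and cacheable, and the inline assembly only compiles for PowerPC targets:

```c
/* Hypothetical PowerPC-only sketch of the dcbz trick: establish each
 * destination cache line as zeros before overwriting it, so a write-back
 * cache never fetches the old contents from memory. CACHE_LINE is an
 * assumption; the real block size must come from the hardware. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CACHE_LINE 128   /* assumed cache block size */

void copy_with_dcbz(uint8_t *dst, const uint8_t *src, size_t n)
{
    /* dst must be cache-line aligned and the memory must be cacheable,
     * otherwise dcbz either misbehaves or traps. */
    for (size_t off = 0; off + CACHE_LINE <= n; off += CACHE_LINE) {
        /* Create the destination cache line "out of thin air" (all
         * zeros) instead of reading it just to overwrite it. */
        __asm__ volatile("dcbz 0,%0" : : "r"(dst + off) : "memory");
        memcpy(dst + off, src + off, CACHE_LINE);
    }
    /* Any tail smaller than one cache line would be copied normally. */
}
```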
2020-04-17 18:40:55
Yes, but only very rarely do you still want to do it by hand.
2020-04-17 18:42:46
That still requires the model to be accurate in the first place.