For the backend, you can write it in whatever language you want, as long as it can communicate with the browser — that is, the program just has to be an HTTP server.
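To make the point concrete, here's a minimal sketch: a backend doesn't need any framework at all, just something that speaks HTTP. This uses only Python's standard library; the JSON body is an arbitrary placeholder.

```python
# Minimal sketch: any program that speaks valid HTTP is a backend.
# Python stdlib only -- no framework required.
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"hello": "world"}'  # placeholder response body
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # The browser only cares that the responses are well-formed HTTP.
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

You could write the exact same thing in Go, Rust, Java, or anything else — the browser can't tell the difference.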
Python/Node.js do use more RAM, but CPU usage isn't really a problem, and most interpreted languages have a JIT these days, so they're not actually that slow. Where they are slow is within the language itself: they do a lot of arguably unnecessary mallocs and memcpys, and they carry a lot of machinery for reflection. That stuff makes them bulky, but it also gives the language convenient features that let you write expressions very concisely. Servers are a lot more powerful now anyway, so you could say this is trading hardware for lower development cost.
One point I want to make: a language being easy to write and easy to read matters a lot, and it being a bit slower really doesn't. Most of the time you don't need to think about performance upfront either. As long as you're not using some terribly slow algorithm, then even if your service's traffic genuinely gets big, you don't need to panic — just plan for auto scaling in your deployment. Rather than worrying that the server isn't fast enough, you'd do better to worry that nobody uses it.
While I'm at it, about Flask's supposedly poor performance: if you put it behind waitress, it will run your Flask app on a pool of threads, and it's really not that bad. The Flask docs themselves tell you to deploy behind a production WSGI server anyway. And if you really want asyncio, why not just use FastAPI?
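A minimal sketch of the waitress setup, assuming Flask and waitress are installed (`pip install flask waitress`); the route and port here are arbitrary examples:

```python
# Sketch: serving a Flask app with waitress instead of the dev server.
# Assumes flask and waitress are installed; imports are kept lazy so the
# structure reads top-down.

def create_app():
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return {"status": "ok"}  # Flask serializes dicts to JSON

    return app


if __name__ == "__main__":
    from waitress import serve

    # waitress runs the WSGI app on a thread pool (4 threads by default,
    # tunable via the `threads` keyword), unlike Flask's single-threaded
    # dev server.
    serve(create_app(), host="0.0.0.0", port=8080)
```

The same app can also be served from the command line with `waitress-serve`, or handed to any other WSGI server (gunicorn, uWSGI) unchanged.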
Someone also asked what to do for web plus ML: FastAPI was developed precisely for this use case, but again, since this is the backend, you can write it in whatever language you want. Also, PyTorch/TensorFlow are just there to make building and training the model convenient; once trained, you can export the model to whatever format for inference — ONNX is one such format with support in quite a few languages.
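The train-then-export step can be sketched as below, assuming PyTorch is installed; the tiny `Linear` model and the file name `model.onnx` are placeholders standing in for a real trained model.

```python
# Sketch: exporting a (placeholder) trained PyTorch model to ONNX so the
# inference side doesn't have to be Python at all. Assumes torch is
# installed; the import is kept lazy for that reason.

def export_to_onnx(path="model.onnx"):
    import torch

    model = torch.nn.Linear(4, 2)   # stand-in for your trained model
    model.eval()                    # inference mode, not training
    dummy = torch.randn(1, 4)       # example input that fixes the shape

    # torch.onnx.export traces the model with the dummy input and writes
    # the resulting graph to `path`.
    torch.onnx.export(
        model, dummy, path,
        input_names=["input"], output_names=["output"],
    )
    return path
```

The exported `.onnx` file can then be loaded with ONNX Runtime from C#, Java, Go, JavaScript, and more, with no Python anywhere on the inference side.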