【Swap the lead in a video yourself】FakeApp tutorial

挽歌之聲

971 replies
546 Like 50 Dislike
挽歌之聲 2018-01-29 19:08:52
Made a clip the day before yesterday that looked slightly convincing, then accidentally deleted the folder and lost it all.
溫酒斬華佗 2018-01-29 19:10:02
It's not that the loss keeps dropping just because you train longer.
It depends on whether you have enough material for it to train on; if the faces don't match, they just don't match.
Training plateaus once it gets to a certain point.
挽歌之聲 2018-01-29 19:11:34
Source material really is a massive problem.
Nocchi.bb喔 2018-01-29 19:12:53
Just let it train overnight while you sleep.
洛櫻樓 2018-01-29 19:15:57
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[2,512,9,9]
[[Node: model_1/conv2d_4/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](model_1/leaky_re_lu_3/sub, conv2d_4/kernel/read)]]
[[Node: loss_1/mul/_351 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1601_loss_1/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

[5960] Failed to execute script train

What's going on?
LD50_iv 2018-01-29 19:30:52
What card are you on? My 1080 Ti has been training for twenty hours and the loss is still at 0.017.
挽歌之聲 2018-01-29 19:31:58
It's out of memory, try lowering layers or nodes.

That's what reddit says.
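(For anyone hitting the same OOM: the usual workarounds, short of a bigger GPU, are to stop TensorFlow from grabbing the whole card up front and to feed smaller batches. A minimal sketch against the TF 1.x / Keras API tools of this era ran on; the batch size value is purely illustrative, not FakeApp's actual setting:)

import tensorflow as tf
from keras import backend as K

# Allocate GPU memory on demand instead of reserving it all at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))

# If it still runs out of memory, train on smaller batches.
BATCH_SIZE = 16  # illustrative; keep halving until the OOM disappears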
今日我生日 2018-01-29 19:34:08
Lm
普西佛齋娜 2018-01-29 19:40:09
玩撚猿崇煥 2018-01-29 19:44:44
Currently running 吉澤 and Stephy.
霸氣呃蝦條 2018-01-29 19:44:48
Want to know: after building the model once, if I delete the old data B pictures (結衣bb) and swap in a second set (other photos of 結衣bb),
does that count as adding new material to the model, or is it treated as starting over from scratch?
玩撚猿崇煥 2018-01-29 19:47:10
First of all, is your A still the same one?
霸氣呃蝦條 2018-01-29 19:49:28
Right now it's the same one, because B's data doesn't seem to be enough, so I want to add more photos and train it a bit better.
But later on A will be swapped to a second video.
咁你想點姐 2018-01-29 19:51:53
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[5,5,512,1024]
[[Node: training/Adam/Variable_6/Assign = Assign[T=DT_FLOAT, _class=["loc:@training/Adam/Variable_6"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_6, training/Adam/Const_8)]]

[5388] Failed to execute script train

Every time it gets to TRAIN this happens. OP, please help.
霸氣呃蝦條 2018-01-29 19:53:17
In theory, as long as B is the same person, it should train pretty fast, right?
洛櫻樓 2018-01-29 19:56:42
2018-01-29 19:52:26.762355: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
Loss: 0.162298 0.13392
Printing config file to c:\fakes\model\config.p
Saving model weights
Traceback (most recent call last):
  File "train.py", line 138, in <module>
ValueError: cannot reshape array of size 294912 into shape (4,7,3,64,64,3)
[788] Failed to execute script train

Another new problem.
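(That ValueError is just arithmetic: 294912 values is 24 aligned faces of 64x64x3, while a (4, 7, 3, 64, 64, 3) preview grid needs 84, so the reshape cannot possibly fit. Most likely one of the data folders ended up with fewer extracted faces than the preview step expects. A tiny numpy illustration, with the crop size read off the error message:)

import numpy as np

crop = 64 * 64 * 3                    # one 64x64 RGB face crop = 12288 values
print(294912 // crop)                 # 24 crops actually present
print(4 * 7 * 3)                      # 84 crops the target grid shape expects

faces = np.zeros(294912)
faces.reshape(4, 7, 3, 64, 64, 3)     # raises ValueError: sizes do not match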
威威獅令 2018-01-29 19:57:58
Bookmarking to watch 杜小喬.
玩撚猿崇煥 2018-01-29 20:00:40
That's no problem, you can keep going, as long as you don't delete the model.
霸氣呃蝦條 2018-01-29 20:01:56
So do I just add photos to data B, or delete data B's old photos first?
玩撚猿崇煥 2018-01-29 20:05:58
Depends. If the loss has already dropped pretty low, you can delete them.
Actually it works either way.
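(Why either way works: these trainers resume from whatever weights are already saved in the model folder and keep optimizing on the images currently in data A / data B, so adding or swapping B's photos does not throw away progress; only deleting the model does. A rough, self-contained Keras sketch of that resume pattern; the network, path and file name below are illustrative stand-ins, not FakeApp's real layout:)

import os
from keras.models import Sequential
from keras.layers import Dense

MODEL_DIR = "c:/fakes/model"   # illustrative, echoing the config.p location above

def build_net():
    # Stand-in for the real shared-encoder / twin-decoder network.
    net = Sequential([Dense(64, activation="relu", input_shape=(128,)), Dense(128)])
    net.compile(optimizer="adam", loss="mae")
    return net

def load_if_present(net, filename):
    # Resume from saved weights when they exist; otherwise start fresh.
    path = os.path.join(MODEL_DIR, filename)
    if os.path.exists(path):
        net.load_weights(path)
    return net

decoder_B = load_if_present(build_net(), "decoder_B.h5")
# Training then continues from the old weights on whatever images are
# currently in data B, so new photos refine the model rather than reset it.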