throw errors in windows #3

Open
Rohitsalota11 opened this issue Nov 20, 2018 · 3 comments

Comments

@Rohitsalota11

When I run `train.sh 2` in Git Bash, it produces the following output:
```
Namespace(batch_size=16, bits=8, checkpoint_iters=10000, clip=0.5, decoder_fuse_level=1, distance1=1, distance2=2, encoder_fuse_level=1, eval='data/eval', eval_batch_size=1, eval_iters=4500, eval_mv='data/eval_mv', fuse_encoder=True, gamma=0.5, gpus='0', iterations=10, load_iter=None, load_model_name=None, lr=0.00025, max_train_iters=100, model_dir='model', num_crops=2, out_dir='output', patch=64, save_codes=False, save_model_name='demo', save_out_img=True, schedule='50000,60000,70000,80000,90000', shrink=2, stack=True, train='data/train', train_mv='data/train_mv', v_compress=True, warp=True)

Creating loader for data/train...
448 images loaded.
distance=1/2
Loader for 448 images (28 batches) created.
Encoder fuse level: 1
Decoder fuse level: 1
Namespace(batch_size=16, bits=8, checkpoint_iters=10000, clip=0.5, decoder_fuse_level=1, distance1=1, distance2=2, encoder_fuse_level=1, eval='data/eval', eval_batch_size=1, eval_iters=4500, eval_mv='data/eval_mv', fuse_encoder=True, gamma=0.5, gpus='0', iterations=10, load_iter=None, load_model_name=None, lr=0.00025, max_train_iters=100, model_dir='model', num_crops=2, out_dir='output', patch=64, save_codes=False, save_model_name='demo', save_out_img=True, schedule='50000,60000,70000,80000,90000', shrink=2, stack=True, train='data/train', train_mv='data/train_mv', v_compress=True, warp=True)

Creating loader for data/train...
448 images loaded.
distance=1/2
Loader for 448 images (28 batches) created.
Encoder fuse level: 1
Decoder fuse level: 1
Traceback (most recent call last):
  File "<string>", line 1, in <module>
Traceback (most recent call last):
  File "train.py", line 111, in <module>
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
    for batch, (crops, ctx_frames, _) in enumerate(train_loader):
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\data\dataloader.py", line 501, in __iter__
    exitcode = _main(fd)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 114, in _main
    return _DataLoaderIter(self)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\data\dataloader.py", line 289, in __init__
    prepare(preparation_data)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 225, in prepare
    w.start()
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 105, in start
    _fixup_main_from_path(data['init_main_from_path'])    self._popen = self._Popen(self)

  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
    run_name="__mp_main__")
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 263, in run_path
    return Popen(process_obj)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\reduction.py", line 60, in dump
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 96, in _run_module_code
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "G:\video compression\pytorch-vcii-master\train.py", line 111, in <module>
    for batch, (crops, ctx_frames, _) in enumerate(train_loader):
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\data\dataloader.py", line 501, in __iter__
    return _DataLoaderIter(self)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\data\dataloader.py", line 289, in __init__
    w.start()
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\rohit\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
```

I also tried running train.py directly with all the necessary parameters (train, eval, distance1, distance2), but still no luck.

@roychen1998

I met the same issue; here is the link FYI: pytorch/pytorch#5858

@MubarkLa

> I met the same issue; here is the link FYI: pytorch/pytorch#5858

@roychen1998

Hello, thanks a lot for your reply. I have carefully read that issue but still face the same problem.
I am trying to run the UCF101_3DCNN file in https://github.com/HHTseng/video-classification/tree/master/Conv3D, and I added

```python
def run():
    torch.multiprocessing.freeze_support()
    print('loop')

if __name__ == '__main__':
    run()
```

at the beginning of the UCF101_3DCNN file, but I still have the same problem:

```
C:\Anaconda3\envs\pytorch1\python.exe D:/LSTM/study/video-classification-master/CRNN/UCF101_CRNN.py
loop
C:\Anaconda3\envs\pytorch1\lib\site-packages\sklearn\preprocessing\_encoders.py:415: FutureWarning: The handling of integer data will change in version 0.22. Currently, the categories are determined based on the range [0, max(values)], while in the future they will be determined based on the unique values.
If you want the future behaviour and silence this warning, you can specify "categories='auto'".
In case you used a LabelEncoder before this OneHotEncoder to convert the categories to integers, then you can now use the OneHotEncoder directly.
  warnings.warn(msg, FutureWarning)
C:\Anaconda3\envs\pytorch1\lib\site-packages\sklearn\preprocessing\_encoders.py:415: FutureWarning: The handling of integer data will change in version 0.22. Currently, the categories are determined based on the range [0, max(values)], while in the future they will be determined based on the unique values.
If you want the future behaviour and silence this warning, you can specify "categories='auto'".
In case you used a LabelEncoder before this OneHotEncoder to convert the categories to integers, then you can now use the OneHotEncoder directly.
  warnings.warn(msg, FutureWarning)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\spawn.py", line 105, in spawn_main
Traceback (most recent call last):
  File "D:/LSTM/study/video-classification-master/CRNN/UCF101_CRNN.py", line 217, in <module>
    exitcode = _main(fd)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\spawn.py", line 114, in _main
    train_losses, train_scores = train(log_interval, [cnn_encoder, rnn_decoder], device, train_loader, optimizer, epoch)
  File "D:/LSTM/study/video-classification-master/CRNN/UCF101_CRNN.py", line 61, in train
    for batch_idx, (X, y) in enumerate(train_loader):
  File "C:\Anaconda3\envs\pytorch1\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
    prepare(preparation_data)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Anaconda3\envs\pytorch1\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
    run_name="__mp_main__")
  File "C:\Anaconda3\envs\pytorch1\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Anaconda3\envs\pytorch1\lib\runpy.py", line 96, in _run_module_code
    w.start()
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\process.py", line 105, in start
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Anaconda3\envs\pytorch1\lib\runpy.py", line 85, in _run_code
    self._popen = self._Popen(self)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\context.py", line 223, in _Popen
    exec(code, run_globals)
  File "D:\LSTM\study\video-classification-master\CRNN\UCF101_CRNN.py", line 217, in <module>
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\context.py", line 322, in _Popen
    train_losses, train_scores = train(log_interval, [cnn_encoder, rnn_decoder], device, train_loader, optimizer, epoch)
  File "D:\LSTM\study\video-classification-master\CRNN\UCF101_CRNN.py", line 61, in train
    for batch_idx, (X, y) in enumerate(train_loader):
  File "C:\Anaconda3\envs\pytorch1\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
    return Popen(process_obj)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\reduction.py", line 60, in dump
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Anaconda3\envs\pytorch1\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
    w.start()
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\Anaconda3\envs\pytorch1\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

Process finished with exit code 1
```

It seems the same problem cannot be solved by this method. Could you please kindly help me solve this?
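
For context, the failing pattern seems to boil down to something like the following minimal sketch (illustrative only; `DummyDataset` and the tensor shapes are placeholders, not the actual UCF101_CRNN.py code): the `DataLoader` with `num_workers > 0` is created and iterated at module level, so when Windows spawns a worker it re-imports the script and immediately hits the same loop again, while the `run()` guard only protects the `freeze_support()` call.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class DummyDataset(Dataset):
    """Placeholder dataset standing in for the real video clips."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.zeros(3, 16, 112, 112), 0

# Module-level loader with worker processes, as in the notebook-style script.
train_loader = DataLoader(DummyDataset(), batch_size=2, num_workers=4)

def run():
    torch.multiprocessing.freeze_support()
    print('loop')

if __name__ == '__main__':
    run()

# This loop still executes during the re-import of the module in every
# spawned worker, because it sits outside the __name__ == '__main__' guard,
# so on Windows it raises the RuntimeError / BrokenPipeError shown above.
for batch_idx, (X, y) in enumerate(train_loader):
    pass
```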

@ekremcet

ekremcet commented Feb 21, 2020

Hi, I had the same problem on Windows as well. I think it is caused by Windows using spawn instead of fork to start child processes. To fix it, add `if __name__ == '__main__':` at line 110 of train.py and move lines 111-225 inside that block, so train.py looks like this:

```python
if __name__ == '__main__':
    while True:

        for batch, (crops, ctx_frames, _) in enumerate(train_loader):
            scheduler.step()
            train_iter += 1
        ...
        ...
        if train_iter > args.max_train_iters:
            print('Training done.')
            break
```

Hope this fixes the issue for you as well.
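
More generally, the key point seems to be that any code which iterates a `DataLoader` with `num_workers > 0` must only run under the `__main__` guard on Windows, because the spawned workers re-import the main module. Here is a minimal self-contained sketch of that pattern (`ToyDataset` and `main()` are placeholders for illustration, not code from this repo):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Placeholder dataset; swap in the real video dataset here."""
    def __len__(self):
        return 32

    def __getitem__(self, idx):
        return torch.randn(3, 64, 64), idx % 2

def main():
    # Worker processes are only started from inside the guarded entry point,
    # so spawned children can import this module without re-running training.
    train_loader = DataLoader(ToyDataset(), batch_size=8, num_workers=2)
    for batch, (x, y) in enumerate(train_loader):
        print(batch, x.shape, y.shape)

if __name__ == '__main__':
    # freeze_support() only matters if the script is frozen into an executable,
    # but it is harmless to keep it here.
    torch.multiprocessing.freeze_support()
    main()
```

If restructuring the script is inconvenient, setting `num_workers=0` in the `DataLoader` also sidesteps the spawn/re-import problem, at the cost of loading data in the main process.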
