
Traceback (most recent call last):
  File "C:\GIT_REPO\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\GIT_REPO\stable-diffusion-webui\modules\call_queue.py", line 32, in f
    shared.state.begin(job=id_task)
  File "C:\GIT_REPO\stable-diffusion-webui\modules\shared_state.py", line 126, in begin
    devices.torch_gc()
  File "C:\GIT_REPO\stable-diffusion-webui\modules\devices.py", line 81, in torch_gc
    torch.cuda.empty_cache()
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
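
Note that the failure is not in image generation itself: `shared.state.begin` calls webui's `devices.torch_gc()`, which calls `torch.cuda.empty_cache()`, and even that cleanup step raises because the device is already out of memory. Below is a minimal sketch (my own illustration, not the actual webui code) of a `torch_gc`-style helper that catches this error instead of letting it abort the queued job:

```python
# Sketch of a torch_gc-style cleanup helper that tolerates the OOM
# seen above rather than propagating it. Not webui's implementation.
import gc

import torch


def safe_torch_gc() -> None:
    """Run Python GC, then try to release cached CUDA memory."""
    gc.collect()
    if torch.cuda.is_available():
        try:
            torch.cuda.empty_cache()   # the call that raised above
            torch.cuda.ipc_collect()
        except RuntimeError as e:      # e.g. "CUDA error: out of memory"
            print(f"torch_gc skipped: {e}")


if __name__ == "__main__":
    safe_torch_gc()
```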

---

Traceback (most recent call last):
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\GIT_REPO\stable-diffusion-webui\modules\call_queue.py", line 77, in f
    devices.torch_gc()
  File "C:\GIT_REPO\stable-diffusion-webui\modules\devices.py", line 81, in torch_gc
    torch.cuda.empty_cache()
  File "C:\GIT_REPO\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
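
The same `devices.torch_gc()` → `torch.cuda.empty_cache()` call fails again on the Gradio request path, which suggests the GPU has essentially no free memory at all. A quick diagnostic sketch (assuming a CUDA-capable PyTorch install; these are standard torch APIs) to check how much VRAM is actually available before launching a generation:

```python
# Print the device's free/total VRAM and how much torch itself holds,
# to tell a genuine capacity problem from memory held by something else.
import torch

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # bytes on the current device
    print(f"free:  {free / 1024**3:.2f} GiB")
    print(f"total: {total / 1024**3:.2f} GiB")
    print(f"allocated by torch: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
    print(f"reserved by torch:  {torch.cuda.memory_reserved() / 1024**3:.2f} GiB")
else:
    print("CUDA device not visible to torch")
```

If `free` is near zero even right after starting the webui, another process (or a previous crashed run) may be holding the VRAM. Within stable-diffusion-webui, memory pressure is commonly reduced with the `--medvram` or `--lowvram` launch flags, or by lowering the resolution and batch size.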

# StableDiffusion
# Error
