print(torch.cuda.is_available()) closes the Python command prompt

I’m trying to run the following to check whether PyTorch can see CUDA on my system, but as soon as I enter the third line the prompt closes without printing any output or error.

import torch
print(torch.__version__)
print(torch.cuda.is_available())
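
In case more detail helps, a script version of the same check (saved as e.g. check_cuda.py and run with python check_cuda.py from an already-open cmd window, so the lines printed before the crash stay visible) would look like this — the extra version prints are just guesses at what might be useful:

# check_cuda.py — run from an existing cmd window so earlier output isn't lost
import torch

print("torch:", torch.__version__)             # PyTorch build, e.g. 1.13.1+cu117
print("built for CUDA:", torch.version.cuda)   # CUDA runtime the wheel was built against
print("cuDNN:", torch.backends.cudnn.version())
print("cuda available:", torch.cuda.is_available())  # the call that crashes for me
print("device count:", torch.cuda.device_count())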

The only error information I can find is in the Event Viewer, which shows these two events:

Event 1000, Application Error
Faulting application name: python.exe, version: 3.10.9150.1013, time stamp: 0x638fa05d
Faulting module name: nvcuda64.dll, version: 31.0.15.3129, time stamp: 0x640826be
Exception code: 0xc0000409
Fault offset: 0x000000000053c834
Faulting process id: 0x2af8
Faulting application start time: 0x01d9587b425c6f40
Faulting application path: C:\Users\Tampa\AppData\Local\Programs\Python\Python310\python.exe
Faulting module path: C:\WINDOWS\system32\DriverStore\FileRepository\nv_dispi.inf_amd64_059948e396d205d5\nvcuda64.dll
Report Id: 07e17557-8856-46b1-884c-14add9ca30ea
Faulting package full name:
Faulting package-relative application ID:

Event 1001, Windows Error Reporting
Fault bucket 1843125454459915623, type 5
Event Name: BEX64
Response: Not available
Cab Id: 0

Problem signature:
P1: python.exe
P2: 3.10.9150.1013
P3: 638fa05d
P4: nvcuda64.dll
P5: 31.0.15.3129
P6: 640826be
P7: 000000000053c834
P8: c0000409
P9: 0000000000000005
P10:

Attached files:
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WERAEA.tmp.dmp
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WERB68.tmp.WERInternalMetadata.xml
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WERB79.tmp.xml
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WERB77.tmp.csv
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WERB97.tmp.txt

These files may be available here:
\\?\C:\ProgramData\Microsoft\Windows\WER\ReportArchive\AppCrash_python.exe_71b197582739d23e413f38df2bba3b7d65e86_86ac2e64_099d44c8-e087-45bb-96b2-71a267320715

Analysis symbol:
Rechecking for solution: 0
Report Id: 07e17557-8856-46b1-884c-14add9ca30ea
Report Status: 268435456
Hashed bucket: 6ff48c6f696031cd999418d5e5904d67
Cab Guid: 0

I tried Python 3.10.9 and 3.10.6 with Torch 1.13.1+cu117 on Windows 10 with a GTX 1660 Super. Sorry if this is the wrong place to ask for help; I’m still very new to all of this.

Could you try updating to the current 2.0.0 release with the CUDA 11.7 or 11.8 runtime and check whether you see the same error?
Also, was PyTorch ever working on this setup, or is it a new machine?
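
Assuming you installed via pip, the update would be along these lines (the exact command for your setup is generated on pytorch.org/get-started, so double-check it there):

pip uninstall torch
pip install torch --index-url https://download.pytorch.org/whl/cu118

That should pull the 2.0.0 wheels built against CUDA 11.8; afterwards you can rerun the same three lines and post what torch.__version__ and torch.cuda.is_available() report.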