Dual-Booting with Ubuntu

My Science Fair Journey, Part 2

What was discussed last week…

  • Moving files and folders around using Python libraries pathlib and shutil

Tuesday, October 7th

For context, my project is about biomedical engineering.

After getting my new (and first) desktop PC, I decided to try to dual-boot it with Windows and Ubuntu, a version of Linux (technically called a “distro”). Why? While I like Windows (def not favoritism), Ubuntu is necessary to test a certain publicly-available pre-trained AI model that I plan to use as a reference point when training my own version of that model. Essentially, after getting the results and “error metrics” from how well the pre-trained model did on certain biomedical data, my goal is to train a model that surpasses the pre-trained model’s accuracy.

Of course, Ubuntu didn’t work perfectly the first time, so I had to turn my desktop PC off and on again, and then again, and then again…

Wednesday, October 8th

I was enlightened today when I finally learned how dual-booting on my PC actually works! When you introduce a new OS (operating system) to your PC via a “flashed” USB drive that can install the new OS, you can choose where to install it: on your PC’s SSD (Solid-State Drive/Disk), or on the USB drive itself. People generally install the new OS on their PC’s internal storage, alongside the existing one.

Notice that in this case, nothing about the USB drive’s contents changed: it only ever held the installer. This is the general method for dual-booting a PC.

Saturday, October 11th

After spending a few nights (and many hours) just trying to get a pre-trained model (basically a ready-to-go model that can be tested right away) running, I managed to train the model successfully for the first time!

Here were some of the problems I ran into:

  1. File structures: The pre-trained model was using the nnUNet (v1) architecture, a prominent type of neural network model for tasks such as classifying images and separating different regions of an image (otherwise known as segmentation). An architecture is basically the “skeleton” of how a neural network model is organized. This architecture required a very specific organization of which files (e.g. data) should be in which folders. If I messed up the file structure one bit, the model wouldn’t be able to find the correct files, and wouldn’t work.

  2. My new PC vs. old libraries: Libraries in programming are pre-built, reusable pieces of code that can execute a certain task reliably, whether it be in robotics, game development, or in my case, AI. After getting my pre-built PC with an RTX 5060, I found that, as of October 2025, some libraries such as PyTorch didn’t have stable versions compatible with such a new graphics card. Consequently, I had to downgrade certain libraries, and in some cases use a “nightly build” of a library. A “nightly build” is a generally untested version of a piece of software, which means that even though it’s one of the newest versions, it could be unstable.
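To give a feel for problem #1, here’s a minimal sketch (using the same pathlib tricks from last week) of the kind of folder layout nnUNet (v1) expects for a task. The task name “Task001_Demo” and the case file names here are made-up placeholders, not my actual data, and real image files would be actual medical scans rather than empty files:

```python
from pathlib import Path

# Sketch of the folder layout nnUNet (v1) expects for one task.
# "Task001_Demo" is a placeholder task name, not my real dataset.
base = Path("nnUNet_raw_data_base") / "nnUNet_raw_data" / "Task001_Demo"
for sub in ("imagesTr", "labelsTr", "imagesTs"):
    (base / sub).mkdir(parents=True, exist_ok=True)

# Training images carry a modality suffix like _0000 before the extension,
# and every training image needs a label file with the same case name.
(base / "imagesTr" / "case_01_0000.nii.gz").touch()
(base / "labelsTr" / "case_01.nii.gz").touch()
```

If even one file lands in the wrong folder or is named slightly differently, the model can’t find it, which is exactly the kind of mistake I kept making.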

Luckily, I got it all to work! (at 1am on a Saturday…)
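For the record, installing a PyTorch nightly build looks roughly like this. This follows the general command format from PyTorch’s install page; the exact CUDA tag in the URL (cu128 here) is an assumption that depends on your GPU and driver, so check what matches your setup:

```shell
# "--pre" allows pre-release (nightly) versions, pulled from the nightly index.
# The CUDA tag (cu128) is an assumption -- pick the one matching your GPU/driver.
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu128
```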

Lessons Learned

  • Dual-booting doesn’t install the actual OS onto the removable USB drive; the USB only carries the installer, and the OS itself ends up on the PC’s internal storage.

  • Mixing versions of things from different times, whether software or hardware, can be hard and often requires workarounds.