Posts

CST334 Week 7

This week’s module focused on how operating systems manage I/O devices, hard drives, and the basics of file systems. The main topics were I/O devices, the structure and performance of hard disk drives, files and directories, and finally how a simple file system is implemented. All of these tied back to the bigger idea of how the OS communicates with hardware while keeping everything organized, reliable, and efficient. In our discussion of I/O devices we distinguished between block devices and character devices and looked at how the OS communicates with each category. That discussion made clear that disks, keyboards, and USB peripherals have markedly different performance requirements. I grasped the main ideas but struggled a bit with hierarchical buses and why faster devices sit closer to the CPU. Once I saw that it’s all about minimizing latency and maximizing data transfer rate, the concept clicked. Next, we w...
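One simple way an OS talks to a device is to poll its status register; here's a toy C sketch of that protocol (`struct device` and its fields are stand-ins I invented, not real hardware registers): the driver spins on STATUS until the device reports idle, then reads the result out of DATA.

```c
#include <stdbool.h>

/* A toy device with STATUS and DATA "registers", standing in for the
   memory-mapped registers a real driver would poll. */
struct device {
    bool busy;     /* STATUS: is the device still working? */
    int  data;     /* DATA: the result of the last operation */
    int  polls;    /* how many times the driver checked STATUS */
};

/* Our fake device finishes after a few status checks. */
static bool read_status(struct device *d) {
    d->polls++;
    if (d->polls >= 3)          /* pretend the operation completes now */
        d->busy = false;
    return d->busy;
}

/* The basic polling protocol: spin on STATUS until the device
   is idle, then read the result out of DATA. */
int polled_read(struct device *d) {
    while (read_status(d))
        ;                       /* burn CPU cycles while waiting */
    return d->data;
}
```

Polling wastes CPU time spinning, which is exactly why interrupts exist for slower devices.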

CST334 Week 6

The week was primarily about concurrency. The most important things we studied were condition variables, semaphores, the implementation of a bounded buffer, synchronization barriers, the Anderson/Dahlin method, and some typical concurrency bugs. All of these topics are related, but each deals with thread coordination in a different way. Condition variables made sense to me once I realized they basically act like a “waiting room” for threads: a thread can go to sleep until some condition becomes true, and another thread signals it when it’s time to move forward. Semaphores felt similar at first, but I learned they’re more like counters that let a certain number of threads access something. The difference between binary and counting semaphores stood out to me. A binary semaphore feels like a simple lock, while counting semaphores allow more flexibility depending on the resource. The hardest part this week was keeping the differences between locks and condition variabl...
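A minimal one-slot bounded buffer in C, along the lines of the producer/consumer pattern we studied (names like `put`/`get` and the `ITEMS` count are just illustrative): the two condition variables are the “waiting rooms,” one for threads waiting on a full slot and one for threads waiting on an empty slot.

```c
#include <pthread.h>

#define ITEMS 100

static int buffer, count = 0;                /* one-slot bounded buffer */
static pthread_mutex_t m     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  fill  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  empty = PTHREAD_COND_INITIALIZER;
static long total = 0;

static void put(int v) {
    pthread_mutex_lock(&m);
    while (count == 1)                       /* sleep until the slot is empty */
        pthread_cond_wait(&empty, &m);
    buffer = v;
    count = 1;
    pthread_cond_signal(&fill);              /* wake a waiting consumer */
    pthread_mutex_unlock(&m);
}

static int get(void) {
    pthread_mutex_lock(&m);
    while (count == 0)                       /* sleep until the slot is full */
        pthread_cond_wait(&fill, &m);
    int v = buffer;
    count = 0;
    pthread_cond_signal(&empty);             /* wake a waiting producer */
    pthread_mutex_unlock(&m);
    return v;
}

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= ITEMS; i++)
        put(i);
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++)
        total += get();                      /* only this thread touches total */
    return NULL;
}

long run_bounded_buffer(void) {
    pthread_t p, c;
    total = 0;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return total;                            /* sum of 1..ITEMS */
}
```

The `while` (not `if`) around each `pthread_cond_wait` is the detail the readings stress: a woken thread must re-check the condition before proceeding.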

CST334 Week 5

This week’s module was all about concurrency, threads, locks, and how locks can become part of real data structures. The main focus of our discussion was the basics of concurrency, the Thread API, locks, and lock-based data structures. When I first read the four readings, I found it difficult to see how they were connected, but later I understood that each one builds on the previous one. Concurrency is the idea that a single physical CPU can act like many virtual CPUs, which allows multiple threads in a program to run simultaneously. I got that a thread is something like a mini-process, except that it shares the same address space with the other threads. That much was clear to me, but I had to go over the part about context switching between threads again, since I was not completely sure how the registers for each thread are saved and restored. The Thread API chapter helped clear...
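A minimal sketch of the Thread API plus a lock, using the standard pthreads calls (`pthread_create`, `pthread_join`, `pthread_mutex_lock`); the function names and counts are mine, for illustration. Several threads bump a shared counter, and the mutex keeps the increments from racing, so the final total is exact.

```c
#include <pthread.h>

#define NTHREADS 4
#define NITERS   10000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread adds to the shared counter under the lock,
   so increments from different threads never interleave badly. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;                   /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Spawn the threads, join them all, and return the final count. */
long run_counter_demo(void) {
    pthread_t tids[NTHREADS];
    counter = 0;
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);
    return counter;
}
```

Without the lock, the same code would usually produce a total less than `NTHREADS * NITERS`, since the threads share one address space and trample each other's updates.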

CST334 Week 4

This week’s module covered a lot about how memory virtualization actually works, and honestly, it tied together a lot of things that were confusing before. The big topics were free-space management, paging, swapping, and translation lookaside buffers (TLBs). Even though these ideas are all different, I started noticing how they connect to each other as parts of a bigger system. Free-space management made the most sense right away because the examples clearly showed how fragmentation can mess everything up. It also helped explain why paging uses fixed-size units. Then we moved into TLBs, and that was one of my “aha” moments. Thinking of a TLB as basically a super-fast “cheat sheet” that saves recent translations made the whole idea of speeding up memory access feel a lot more real. Multi-level paging was one of the topics I had to reread a couple of times. The part that tripped me up was how page tables get so massive that you literally can’t store them as one giant table. Once I realized m...
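To make the “cheat sheet” picture concrete, here's a toy TLB sketch in C (the sizes, names, and fully-associative layout are all made up for illustration): a lookup either hits, building the physical address from the cached frame number, or misses, which is when the page table would have to be walked.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                       /* 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define TLB_SLOTS  4

struct tlb_entry { uint32_t vpn, pfn; bool valid; };
static struct tlb_entry tlb[TLB_SLOTS];

/* Split a virtual address into page number and offset. */
uint32_t vpn_of(uint32_t vaddr)    { return vaddr >> PAGE_SHIFT; }
uint32_t offset_of(uint32_t vaddr) { return vaddr & (PAGE_SIZE - 1); }

/* Check the "cheat sheet": on a hit, build the physical address
   from the cached frame number; on a miss, report failure (a real
   system would then walk the page table and refill the TLB). */
bool tlb_lookup(uint32_t vaddr, uint32_t *paddr) {
    uint32_t v = vpn_of(vaddr);
    for (int i = 0; i < TLB_SLOTS; i++) {
        if (tlb[i].valid && tlb[i].vpn == v) {
            *paddr = (tlb[i].pfn << PAGE_SHIFT) | offset_of(vaddr);
            return true;                    /* TLB hit */
        }
    }
    return false;                           /* TLB miss */
}

void tlb_insert(int slot, uint32_t vpn, uint32_t pfn) {
    tlb[slot] = (struct tlb_entry){ vpn, pfn, true };
}
```

The speedup comes from the hit path: a handful of parallel comparisons in hardware instead of one or more memory accesses to walk the page table.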

CST334 Week 3

This week’s module focused heavily on memory virtualization, and honestly, this has been the first week where all the topics felt like they finally connected into one big picture. We covered address spaces, the C memory API, address translation, base-and-bounds, segmentation, and paging. At first, each of these topics felt separate, but the deeper I got into the readings and the PA3 instructions, the more I started seeing how they all fit together under the idea of how the OS makes memory feel simple even though the hardware underneath is anything but. One of the clearest ideas this week was the concept of an address space. I always understood virtual vs. physical memory at a high level, but reading OSTEP 13 made it click that an address space is basically the illusion the OS gives each process: its own clean slate, even though everything is ultimately sharing the same physical RAM. That idea tied directly into the C Memory API, because malloc, free, and even pointers all rely on the O...
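Base-and-bounds is simple enough to sketch in a few lines of C (assuming, as in the readings, that the bounds register holds the size of the address space; the function name `translate` is mine): the hardware checks the virtual address against bounds and, if it's legal, adds base.

```c
#include <stdint.h>

/* Base-and-bounds translation: a virtual address is valid only if it
   falls below the bounds register; otherwise the access is a fault.
   Returns the physical address, or -1 standing in for the fault. */
int64_t translate(uint32_t vaddr, uint32_t base, uint32_t bounds) {
    if (vaddr >= bounds)
        return -1;                  /* would raise an exception on real hardware */
    return (int64_t)base + vaddr;   /* physical = base + virtual */
}
```

For example, with base 32768 and bounds 4096, virtual address 100 maps to physical 32868, while virtual address 5000 is out of bounds. Segmentation generalizes this by keeping a base/bounds pair per segment instead of one per process.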

CST334 Week 2

During this week's class, our discussion centered on the operating system's ability to manage multiple programs simultaneously. The key point was the process concept, which can simply be defined as an executing program. I was not aware of how many programs my computer was "running" at once, so it was fun to find out that the OS creates the illusion of having several CPUs through continuous process switching. Physically there may be only one CPU, but the impression is that everything is happening at the same time. We also discussed the Process API, mainly the fork(), exec(), and wait() functions. The fork() function, to be honest, was the most confusing to me at first, because it creates a duplicate of the program and the parent and child processes go on executing from the same point. This concept was a bit difficult for me to comprehend, but when I saw examples it became clearer. In contrast, exec()...
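A way to see fork() and wait() working together is this minimal C sketch (the wrapper name `run_child_and_wait` and the status 42 are mine, for illustration): the child exits with a known status, and the parent blocks until it can collect that status.

```c
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with a known code, wait for it,
   and return the child's exit status to the caller. */
int run_child_and_wait(void) {
    pid_t pid = fork();           /* duplicate the calling process */
    if (pid < 0) {
        return -1;                /* fork failed */
    } else if (pid == 0) {
        exit(42);                 /* child: finishes with status 42 */
    } else {
        int status;               /* parent: block until the child is done */
        waitpid(pid, &status, 0);
        if (WIFEXITED(status))
            return WEXITSTATUS(status);
        return -1;
    }
}
```

Both processes return from the same fork() call; the return value (0 in the child, the child's PID in the parent) is the only way each one knows which side it is on.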

CST334 Week 1

Week One: This week was a good introduction to how computers really work behind the scenes. I’ve used operating systems my whole life without ever thinking about what’s actually going on when I open an app or run a program. The fact that the OS is the connection point between hardware and software made me realize why it is so important to talk about it. It manages memory, controls processes, handles input and output, and basically keeps everything running smoothly. Without an OS, a computer would just be a pile of hardware that doesn’t know what to do. Besides that, we also took a look at computer architecture, which described how the CPU, memory, and storage interact. I already had a rough understanding of what these components do, but this week was enlightening for me. Seeing the data flow between the processor and memory visualized, with instruction execution broken into fine step-by-step movements of the machine, helped a lot. The lesson about binary, decimal, and hexa...
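A small C sketch of the decimal-to-binary idea (the helper name `to_binary` is mine): repeatedly peel off the lowest bit of the number, then reverse the collected digits into the output string.

```c
#include <string.h>

/* Write the binary digits of n into buf, most-significant bit first.
   Handles n == 0 as "0". buf must have room for up to 33 characters. */
void to_binary(unsigned int n, char *buf) {
    if (n == 0) { strcpy(buf, "0"); return; }
    char tmp[64];
    int len = 0;
    while (n > 0) {
        tmp[len++] = '0' + (n & 1);   /* peel off the lowest bit */
        n >>= 1;
    }
    for (int i = 0; i < len; i++)     /* reverse into the output buffer */
        buf[i] = tmp[len - 1 - i];
    buf[len] = '\0';
}
```

For example, 13 comes out as "1101". Hexadecimal works the same way, just grouping the bits four at a time.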