CST334 Week 5

This week’s module was all about concurrency: threads, locks, and how locks become part of real data structures. The main focus of the discussion was the basics of concurrency, the Thread API, locks, and lock-based data structures. When I first read the four readings, I found it hard to see how they were connected, but I eventually realized that each one builds on the previous one.

The concurrency chapter showed how a single physical CPU can act like many virtual CPUs, which lets multiple threads in a program run at the same time. I understood that a thread is something like a mini-process, except that it shares the same address space with the other threads in its process. That much was clear to me, but I had to reread the part about context switching between threads, since I wasn’t completely sure how each thread’s registers are saved and restored.
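
To make the shared-address-space idea concrete for myself, I wrote a tiny sketch (my own example, not from the reading, and it borrows pthread_create() a chapter early). Two threads print the address of a shared global and of a local variable on their own stacks; the global address comes out the same for both threads, while the locals differ:

```c
#include <pthread.h>
#include <stdio.h>

int shared_value = 0;   /* one copy of this global, visible to every thread */

void *worker(void *arg) {
    (void)arg;          /* unused */
    int local = 0;      /* lives on this thread's own private stack */
    printf("&shared_value = %p, &local = %p\n",
           (void *)&shared_value, (void *)&local);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;   /* both threads print the same &shared_value, different &local */
}
```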

The Thread API chapter helped clear that up by showing how threads are created in practice using pthread_create(). The arguments for the function were a little confusing at first, especially the pointer to the start routine and the arg parameter, but once I pictured each thread starting at its own function, it clicked for me.
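
Here is a minimal sketch of how I now picture those arguments; the struct and function names are mine rather than the chapter’s, and I’m assuming it gets compiled with -lpthread:

```c
#include <pthread.h>
#include <stdio.h>

/* A hypothetical argument struct to hand to the new thread. */
typedef struct {
    int a;
    int b;
} myarg_t;

/* The "start routine": the new thread begins running here, and arg points
   at whatever we passed as the last argument to pthread_create(). */
void *mythread(void *arg) {
    myarg_t *args = (myarg_t *)arg;
    printf("thread got %d and %d\n", args->a, args->b);
    return NULL;
}

int main(void) {
    pthread_t t;
    myarg_t args = { 10, 20 };

    /* 2nd arg: thread attributes (NULL = defaults)
       3rd arg: pointer to the start routine
       4th arg: the single void* handed to that routine */
    pthread_create(&t, NULL, mythread, &args);
    pthread_join(t, NULL);   /* wait for the thread to finish */
    return 0;
}
```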

Locks were the trickiest topic for me this week. Understanding why we need them was easy: without locks, two threads updating the same variable can corrupt it. Wrapping my head around how locks “protect” a critical section took more time. Seeing the example of incrementing a shared balance helped, and the idea that a lock is either available or held made it simpler.
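
This is my own version of that shared-balance idea, using a pthread mutex as the lock (the chapter also builds its own locks, so this is just the pattern, not its code):

```c
#include <pthread.h>
#include <stdio.h>

/* Shared balance, plus a lock that is either available or held. */
static int balance = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *deposit(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* acquire: only one thread may be inside */
        balance = balance + 1;        /* the critical section */
        pthread_mutex_unlock(&lock);  /* release: the lock is available again */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d\n", balance);  /* 200000 with the lock; unpredictable without */
    return 0;
}
```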

Finally, the chapter on lock-based data structures tied everything together. It showed how adding a lock to something as simple as a counter makes the structure thread-safe, but also how that can hurt performance. My “aha moment” this week was realizing that even the smallest data structure can become unsafe once multiple threads access it at the same time.
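
The pattern I took away looks roughly like this thread-safe counter sketch; the names are mine, and I’m again assuming pthread mutexes rather than any particular lock implementation:

```c
#include <pthread.h>

/* A simple thread-safe counter: the lock travels with the data it protects. */
typedef struct {
    int value;
    pthread_mutex_t lock;
} counter_t;

void counter_init(counter_t *c) {
    c->value = 0;
    pthread_mutex_init(&c->lock, NULL);
}

void counter_increment(counter_t *c) {
    pthread_mutex_lock(&c->lock);   /* every operation pays the locking cost... */
    c->value++;
    pthread_mutex_unlock(&c->lock); /* ...which is why performance can suffer */
}

int counter_get(counter_t *c) {
    pthread_mutex_lock(&c->lock);
    int v = c->value;
    pthread_mutex_unlock(&c->lock);
    return v;
}
```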

Overall, this week connected strongly to what I already knew about race conditions and shared memory. It made me think about how these concepts will matter later as we get deeper into virtualization and scheduling. My biggest question now is how operating systems implement more advanced locking strategies and how they keep heavy locking from slowing programs down.

