Introduction to Operating System Design and Implementation: The OSP 2 Approach. Michael Kifer, PhD, State University of New York at Stony Brook, NY, USA.

TaskCB", "Hello World! The format of the where argument is the same as before. This method can be used to halt execution of OSP 2 when a bug is discovered; further execution of OSP 2 under these circumstance is probably not useful under the circumstances. The error method also causes a stack trace and the current OSP 2 snapshot to be included in the log for debugging purposes.

The warning method is similar to print except that a warning message is printed to the log. Unlike error and checkCondition, but like print, the execution of OSP 2 can proceed after this method is called. Like method error, a snapshot and a stack trace are included in the system log.
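For illustration, here is how these logging calls might look inside a student method. This is only a sketch: the two-argument form (a source object or class name, followed by the message) is an assumption and should be checked against the actual MyOut signatures in the OSP 2 API.

    // Hedged sketch of MyOut usage inside a student package.
    // The (source, message) argument form is assumed, not verified.
    MyOut.print(this, "Hello World!");                        // ordinary log entry
    MyOut.warning(this, "ready queue unexpectedly empty");    // logged; execution continues
    MyOut.error(this, "thread has an invalid status");        // logged; halts OSP 2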

These list methods return null if the object is not found or if the list is empty. The current item is set by the enumerators (see below) as they traverse the list, after each call to nextElement. A forward iterator returns an object of class Enumeration (a standard Java class), which can then be used to conveniently traverse a GenericList. The current pointer is the point of insertion for the previously described methods appendToCurrent and prependAtCurrent.
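As a sketch of the iterator idiom just described: the name of the method that produces the forward Enumeration is not given above, so forwardIterator() below is a placeholder, and the no-argument GenericList constructor is likewise assumed.

    import java.util.Enumeration;

    // Hedged sketch: traversing a GenericList with a forward enumerator.
    // forwardIterator() is a placeholder name for the accessor described above.
    GenericList list = new GenericList();
    for (Enumeration e = list.forwardIterator(); e.hasMoreElements(); ) {
        Object item = e.nextElement();   // also advances the list's current item
        MyOut.print("Demo", "visiting " + item);
    }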

This interface mandates only the methods that OSP 2 itself uses internally. When one thread needs to communicate with another, it sends a message and might decide to block itself until a response arrives. In a typical operating system, events are represented by some kind of event data structure.

In OSP 2 , an event is an object and such an announcement is made by executing the notifyThreads method associated with the event. As a result, threads waiting on the event are unblocked by the operating system and can continue their execution.

In OSP 2, events are represented by the Event class. A basic event has an id, which serves to distinguish it from other events, and a waiting queue. Thus, an event provides the means for suspending threads when they have to wait, and subsequently locating them when they are to be resumed.

In practice, the Event class is almost always subclassed before it is used. The Event class provides the methods necessary for maintaining the waiting queue, and these methods can be used on pages, ports, and IORBs when these are used in their capacity as events.
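To make the event mechanism concrete, here is a hedged sketch of how an event might be used to block and later release threads. Only notifyThreads is named explicitly in the text; the queue-maintenance method names and the no-argument Event constructor shown here are assumptions.

    // Hedged sketch: suspending a thread on an event and waking all waiters.
    Event diskEvent = new Event();     // in practice Event is usually subclassed

    thread.suspend(diskEvent);         // block the thread; it joins the event's waiting queue

    // ... later, when the awaited condition occurs ...
    diskEvent.notifyThreads();         // resume every thread still on the waiting queue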

No checks are performed to ensure that the thread is not already on the waiting queue when it is added, and the removal operation returns silently if the thread is not found. It is quite possible that some threads on the waiting queue have been destroyed while waiting. In this case, notifyThreads simply removes the destroyed threads from the queue, as executing resume on such a thread would be an error. Several projects in OSP 2 make extensive use of events and we will refer back to this section when necessary.

Daemons perform periodic background work on behalf of the operating system. In OSP 2, such work might include proactive swapping out of dirty memory pages, as required by some memory-management algorithms, and deadlock detection. To use a daemon, one creates an object in a class that implements DaemonInterface and then registers this object with the system. For instance, in the case of a deadlock-detection daemon, a method should be provided that executes the appropriate deadlock-detection algorithm.

This method is called by OSP 2 when it wakes up the daemon. Registration is typically done when OSP 2 begins executing, inside the init method that exists in the main class of each student package.
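For instance, a deadlock-detection daemon might be registered roughly as follows. This is only a sketch: the Daemon.create signature, the DaemonInterface callback name unleash, and the 20000-tick period are all assumptions made for illustration and should be checked against the actual OSP 2 API.

    // Hedged sketch: a daemon that runs a periodic deadlock check.
    class DeadlockDetector implements DaemonInterface {
        public void unleash(ThreadCB thread) {      // callback name is assumed
            // run the deadlock-detection algorithm here
        }
    }

    // Inside init() of the student package's main class:
    Daemon.create("DeadlockDetector", new DeadlockDetector(), 20000);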

The third argument to the registration call tells OSP 2 how often, in simulation ticks, the daemon should be woken up. Typically the requirement to use daemons would be part of the assignment given out by your instructor, but you might also decide to use them on your own, based on your understanding of the problem. When your implementation of the classes in the project is complete, they should be compiled and linked with the OSP.

In fact, there is no reason to touch wgui. The only recommended way of changing the simulation parameters is through the GUI of the demo version of OSP 2, saving the new parameters from there. A GUI panel that lets the user change the simulation parameters is shown in Figure 1. Java settings. Running OSP 2 requires that the environment variable PATH is set appropriately, so that the Java executables can be found. For Windows, this variable should be set in the autoexec.bat file.

For Unix-based systems, the setting depends on the type of the shell used; the two most popular shells are bash and csh. To set the PATH variable for bash, place the appropriate setting in the shell's startup file. The actual location of the Java executables can vary. In some configurations you might need to run OSP 2 as follows: java -classpath . OSP for Unix-based systems and java -classpath . OSP for Windows.

Compiling and running the project. On Unix-based systems, simply type make, and the project will be compiled.

Sometimes make clean; make can be helpful if you need to get rid of stale class files. Check your make program by running it with the --version argument: if it does not say that it is GNU make, or if it does not understand the --version argument, then it is not GNU make, and you should ask the system administrator whether GNU make is installed and under which name. If you cannot locate the appropriate make program, read on.

If GNU make is unavailable, the following commands can be used to compile and run OSP 2 on a Unix-based system: javac -g -classpath . to compile the sources, and jdb -classpath . OSP to run the system under the Java debugger.


In general, we have the following naming convention: for each operation xyz that a student package must implement, the student writes a method named do xyz, and the IFL layer provides a wrapper named xyz. There is an exception to this rule, namely the methods atError and atWarning, which are introduced below. These conventions are best understood by considering how the event engine drives the simulation. An event is actually a call to one of the student methods, although control must go through the IFL. Assume, for the sake of discussion, that the selected event is a call to the create thread method.

In this case, the event engine calls create in the IFL. The student implementation of do create performs the requested action. If the student code executed incorrectly, an error message is written to the simulation log and the simulation halts. Assuming the student code executed correctly, the simulation proceeds to the next scheduled event on the event queue. Students should therefore adhere to the following additional naming convention: when calling a method named name in this or another package, call the wrapper name (without the do prefix), never do name directly.
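A sketch of this convention, written with underscores as the method names appear in code; the signatures are illustrative rather than the exact ones from your project templates.

    // Hedged sketch of the calling convention.
    public void do_suspend(Event event) {
        // ... bookkeeping for this thread ...

        // Correct: call the wrapper, which routes through the IFL layer.
        ThreadCB.dispatch();

        // Wrong: calling the do_ method directly bypasses the IFL checks.
        // ThreadCB.do_dispatch();
    }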

Note also that the student implementation should never directly refer to the classes defined in the IFL layer.

Static vs. Instance Methods

When you receive a project assignment that contains the templates of the methods to be implemented, you will notice that some methods are static (i.e., class methods), while others are instance methods.

For example, the method do dispatch is static in class ThreadCB, because it makes no sense to call it on any particular thread. On the other hand, the methods do resume and do suspend in ThreadCB are not static: they are invoked on the particular thread being resumed or suspended. As usual in Java, the context object of a non-static method is accessible through the variable this. Therefore, when reading the description of each method in the project, it is important to be aware of whether this method is static or an instance method.
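Illustrative signatures (not necessarily the exact ones in your project templates) that show the distinction:

    // Hedged sketch: a static method vs. instance methods in ThreadCB.
    public class ThreadCB extends IflThreadCB {

        // Static: not tied to any particular thread object.
        public static void do_dispatch() {
            // choose and dispatch some ready thread
        }

        // Instance methods: 'this' is the thread being acted upon.
        public void do_suspend(Event event) { /* ... */ }
        public void do_resume()             { /* ... */ }
    }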

For each project, the student implementation may require services implemented in other parts of OSP 2 and must call the appropriate methods to obtain these services. Methods needed for one project, however, are not necessarily needed for another. In some cases, incorrect use of methods that belong to other packages might even corrupt the internal state of the system.

For example, the method isFree of class FrameTableEntry is available in project Memory, but it is obfuscated away and will cause a compilation error if it is used by methods in project Threads.

Graceful termination, however, is not always possible because OSP 2 is a multi-threaded application and termination of some of the active threads might depend on student code whose behavior cannot be predicted. It is therefore possible that, after printing an error message, OSP 2 may hang; in this case the system must be terminated by the user. This is nothing to worry about, however, as it does not indicate a problem with your program.

The reason for these exceptions is that when the designated simulation time runs out, OSP 2 tries hard to stop all the currently active Java threads. Unfortunately, it is not possible to terminate threads immediately, so a thread may continue to run for a short while even though some of the vital system objects may have already been destroyed.

In such situations, NullPointerException and other exceptions can occur. Perhaps it is best to state what this manual is not: it is not intended to replace the textbook, it is not intended to teach you the basic concepts in operating systems, and it is not intended to guide you every step of the way to the completion of your project.

Instead, the description of a student project provides a complete description of the API that you can use to implement the project and a description of the functionality of each method in the project. The best advice is: if you are in doubt about whether or not it is appropriate for your implementation to take a certain action, consider whether you would like it if the OS on your desktop behaved this way.

For example, suppose you are implementing a thread scheduler and at a certain point in the program you have to deal with the situation where no threads are left to schedule. Should you leave the CPU idle, or create and run a dummy thread, thereby wasting computing resources? The answer should become obvious if you just ask yourself that simple question.

Snapshots of the simulated system are primarily intended for performance checking and debugging. A snapshot contains a complete dump of main memory, the status of all page tables, the status of all threads, including the queues they are in, and the status of all communication ports.

This statistic is a better measure of performance than the average turnaround time; it should be kept as high as possible but, of course, it cannot exceed 1. It should be noted that some entries in the system log can have fairly long lines, so to view the log it may be necessary to use a viewer with horizontal scroll capability; most text editors provide this. Generally, errors in student code can be divided into two categories. Errors that cause Java exceptions.

Semantic errors, such as an incorrect action taken in response to a simulator request; examples include the failure to maintain the correct status of a thread. For the first category, a Java debugger can be used to determine where, for example, a NullPointerException has occurred. In all likelihood, Java exceptions are due to errors in student code. If an exception takes place in OSP 2 code, it does not necessarily mean that the student code is correct; rather, it likely means that OSP 2 has failed to catch the problem early enough to generate a meaningful error message to guide you to the real problem.

System log. When OSP 2 detects a semantic error, it tries to come up with as clear an explanation as possible. When an error or a warning is issued, it is recorded in the system log. When OSP 2 terminates, it tells you if one of these conditions was encountered or if it terminated successfully.

In case of a problem, the best way to understand what might have happened is to trace back the messages in the system log. For instance, if an error message says that you are trying to dispatch a thread that is waiting on some event that has not occurred yet, you should trace back and see when the thread was suspended on that event and what was the sequence of events that happened since. You might discover, for example, that your program is placing threads on the ready queue that, in reality, are not ready to execute.

This is because OSP 2 cannot know what is actually happening inside student code and it is therefore necessary to put the execution of your program in the context of the overall execution of OSP 2. This can be achieved with the help of the methods in the class MyOut, which were discussed earlier.

Moreover, it is useful to keep in mind that the toString method of all major classes in OSP 2 is set up in a printer-friendly manner. For example, passing a ThreadCB object to a MyOut method produces a log entry containing the thread's printable representation. The header of the OSP 2 system log provides a brief explanation of the printable representation of various objects. Error and warning hooks. In addition to MyOut, the main class of every student project has the following pair of methods: atError and atWarning. Normally, the bodies of these methods are empty, and this is how you should leave them when you submit your program.

However, during debugging you can put arbitrary code there. Most useful would be code that prints the status of the relevant variables in your program. Note that whenever a condition violation, error, or warning occurs, OSP 2 prints the full stack trace that indicates the sequence of method calls that led to the problem. System snapshot. OSP 2 also produces a system snapshot when a condition violation or error occurs.

The snapshot conveys the status of memory allocation, the status of each task and thread in the system, etc. This information can be compared with the status of the system per your implementation and the system log can be consulted to determine where the discrepancy arises.

When OSP 2 prints out a warning, no snapshot is added to the log by default. This is because warnings tend to come in large numbers and this can lead to an unmanageably large number of snapshots in the log. However, you can include the snapshot method of class MyOut in the body of the atWarning method of the main class of your project and produce a snapshot in this way.
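For example, during debugging you might fill the body of atWarning roughly as follows. The exact signatures of atWarning and of MyOut's snapshot method, and the readyQueue variable, are assumptions made for this sketch.

    // Hedged sketch: produce a snapshot whenever a warning is issued.
    // Restore the empty body before submitting your project.
    public static void atWarning() {
        MyOut.print("ThreadCB", "ready queue at warning time: " + readyQueue);
        MyOut.snapshot("ThreadCB");   // include a full system snapshot in the log
    }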

Execution stack trace. Another important resource for debugging OSP 2 projects is the execution stack trace provided by the Java virtual machine when a Java exception occurs. Such a trace names the exception (for example, NullPointerException) and then lists the chain of method calls inside the osp packages that led to it, such as EventCallback.Activate and EventEngObj.ActivateChildren.

Going down the trace, we can see the sequence of method calls that led to the error; the most important information here is the line number where the error occurred. For instance, the top line of one trace in the system log says that a warning was issued by method idleCPUwarning of class IflThreadCB, which was called by kill, the system wrapper for the do kill method, which is part of a student project (refer back to Section 1).

The trace further identifies the thread that was being killed at the time. Thus, the cause of the warning is most probably the failure of the student implementation to call the dispatch method at the end of do kill. Unfortunately, the obfuscation that OSP 2 employs to prevent inappropriate calls to certain methods diminishes the value of execution stack traces, because the names of some method calls listed in a trace might be unintelligible.

However, even with name obfuscation, the trace often contains enough information to be useful. In an obfuscated trace, the real name of a method in the osp packages may be replaced by a meaningless identifier.

The following instructions apply if your instructor chooses to use the automatic project submission system of OSP 2. First, you will have to supply your email address to the instructor, who will prepare an account for you. You must use the same address in all your interactions with the submission system.

The submission system provides three functions, which are available as links from the project submission page. The URL of this page will be supplied to you by your instructor. The functions are as follows: Change of password. Clicking on this link will let you change your password.

Your initial password will be mailed to you when the instructor sets up your account. This happens when the browser tries to use your old password.

Password reminder. Organization of OSP 2 36 If you forget your password or if you did not receive the initial password for some reason, you should click on this link.

First, you will get email with a link to a servlet. If you do not click on the aforesaid servlet link, your password will not be changed. It should be noted that the password-reminder function can be used only within intervals of at least four hours.

Project submission. The system then copies the sources over to the server and compiles them. Next, you will have to run this applet by clicking on appropriate buttons. If you are happy with the results, click on the submission button.

The simulation run will then be sent to the server again, so that the instructor can check it for errors. Finally, some browsers might issue a security exception when you try to run the submission applet. You will see this exception in the Java console of the browser (we recommend that you always run the submission applet with the Java console open).

You are new to OSP 2. What do you do? Our example will focus on thread management, in particular, the resumption of a thread from a waiting state. This activity is the responsibility of the method do resume , one of the methods you are to implement as part of your implementation of the class ThreadCB. Underlying all of this is the notion of a thread state, which can be one of ThreadReady, ThreadWaiting, ThreadKill, etc.

When a thread completes the execution of the pagefault handler or blocking system call, it should be moved up to the next highest waiting level by decrementing its waiting status; in the case of level 0 ThreadWaiting, it should transit to the ThreadReady state.
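A minimal sketch of the logic just described follows; getStatus, setStatus, and dispatch are methods named in this manual, while readyQueue stands in for whatever ready-queue structure your own design uses.

    // Hedged sketch of do_resume: move the thread up one waiting level,
    // or make it ready if it was at level 0, then let the dispatcher run.
    public void do_resume() {
        int status = getStatus();

        if (status == ThreadWaiting) {
            // level 0: the thread becomes ready to run
            setStatus(ThreadReady);
            readyQueue.append(this);       // placeholder ready-queue operation
        } else if (status > ThreadWaiting) {
            // deeper waiting level: just come up one level
            setStatus(status - 1);
        }
        // a thread that is not waiting at all is left untouched

        dispatch();                        // give the CPU to some ready thread
    }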

Its code is given in Figure 2. A print statement there announces in the log which thread is being resumed. Do resume is one of the simplest methods in OSP 2. Assuming that you have completed your design and coding of the Threads project, let us proceed in a step-by-step fashion with the example session.

Only a thread with status ThreadWaiting or higher can be resumed; its status must then be set to ThreadReady or decremented, respectively, and a ready thread should be placed on the ready queue. If you suspect that compilation problems are due to stale class files, type make clean and then make to force recompilation of the entire project.

The system log records entries such as the unlocking of a page (with its new lock count) and the entering and leaving of resume for a given thread. In this case, this means the thread moves from waiting-level 2 to waiting-level 1. It is a good idea to have a look at these log entries too, both to see how well your implementation is performing and to simply get a better understanding of how threads behave in OSP 2.

Among the statistics reported at the end of the run is the CPU utilization.

Introducing an Error into do resume

Unfortunately, not all of your runs of OSP 2 will be as successful as the one above: some will fail because of bugs in your code. Let us consider what happens when such a failure occurs. In particular, suppose that in do resume you mistype one of the status constants. What are the consequences of this typo?

OSP 2 then reports an error along the lines of: Thread status is ThreadWaiting3; should be ThreadWaiting1, together with the location in the osp packages where the check failed. What follows the error message is a dump of the system-call stack, which indicates the sequence of method calls that led to the problem.

In an actual debugging situation, you would use this information to isolate and repair the problem in your implementation of the do resume method.

Processes

A process has two aspects: an active, executing aspect and a resource-owning aspect. The former is captured through the concept of a thread, which represents a running program, and the latter is captured using the concept of a task. Threads are the schedulable and dispatchable units of execution in OSP 2. We will have more to say about threads in the next chapter. There is also a system-wide notion of the current task, which is the task that owns the currently running thread.

This thread is known as the current thread of the task. In the rest of this chapter we describe TaskCB, the only class in the Tasks package; its relationship to the rest of the system is shown in the class diagram of Figure 3.


In OSP 2, creation of a task involves the creation of a task object, allocation of resources to the task, and various initializations. The task object is created using the default task constructor TaskCB. First, a page table must be created using the PageTable constructor, and associated with the task using the method setPageTable. Second, a task must keep track of its threads (objects of type ThreadCB), communication ports (objects of type PortCB), and open files.

Lists or variable-size arrays are good candidates for these inventories. Next, the task-creation time should be set equal to the current simulation time (available through the class HClock), the status should be set to TaskLive, and the task priority should be set to some integer value. A swap file must also be created for the task; in OSP 2 its size is determined by the number of bits needed to specify an address in the virtual address space of a task. Creating the swap file can fail due to lack of space on the swap device.

In this case the do create method of TaskCB should dispatch a new thread and return null. Otherwise, the TaskCB object created and initialized by your do create method should be returned.
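Pulling these steps together, here is a hedged sketch of do create. The ready-made calls are only named loosely above, so the exact constructor and accessor signatures, the placeholder fields (threads, ports, files, declared with java.util.ArrayList), and the createSwapFile helper are assumptions to be checked against the real API.

    // Hedged sketch of TaskCB.do_create(); several signatures are assumed.
    public static TaskCB do_create() {
        TaskCB task = new TaskCB();

        // Page table and bookkeeping structures (placeholder fields).
        task.setPageTable(new PageTable(task));
        task.threads = new ArrayList<ThreadCB>();
        task.ports   = new ArrayList<PortCB>();
        task.files   = new ArrayList<OpenFile>();

        // Basic attributes.
        task.setCreationTime(HClock.get());     // assumed HClock accessor
        task.setStatus(TaskLive);
        task.setPriority(1);                    // any integer value will do

        // Swap file; creation can fail if the swap device is full.
        if (!createSwapFile(task)) {            // placeholder helper
            ThreadCB.dispatch();
            return null;
        }

        // Every task starts with one thread.
        ThreadCB.create(task);
        return task;
    }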

The do kill method of TaskCB destroys a task. First, it should iterate through the list of all live threads of the task and kill them; recall that maintenance of this list is entirely the responsibility of your implementation. Each time a thread is killed, the do removeThread method is called by the Threads package. The do kill method should then iterate over the ports attached to the task and destroy them as well. Each request to destroy a port will eventually result in a call to your do removePort method.

The status of the task should then be set to TaskTerm, and the main memory held by the task should be released; the latter is accomplished by invoking the method deallocateMemory of class PageTable on the page table of the task. (Note that there is no need to invoke the dispatch method of ThreadCB in order to schedule a thread to run after the do create system call is complete: since a new thread is created as part of the process of task creation, dispatch will be called by the create method of ThreadCB. However, calling dispatch before leaving do create is harmless.)

Part of killing a task is closing its open files. You should keep in mind that each call to close eventually results in a call to your method do removeFile; however, the close itself might not happen immediately, which means, of course, that calls to your method do removeFile might be similarly delayed.

The do addThread method is called whenever a new thread is created within the task. The purpose of these calls is to notify TaskCB of the creation of a new thread so that the inventory of threads owned by the task can be properly updated.

Conversely, do removeThread ensures that a killed thread is removed from the list of threads owned by the task. The port methods play an analogous role: they enable TaskCB to maintain the inventory of ports that belong to the task, and do removePort should remove the port from the list of ports maintained by TaskCB.

The implementation of the open-files table is entirely up to the student. The do addFile method is typically called by the method open of class OpenFile (indirectly, through the wrapper addFile), while do removeFile is typically called by the method close of class OpenFile.

Several ready-made methods are relevant here. A PortCB method destroys a port and is called when a task is terminated. The PageTable constructor is used to create a page table object for a newly created task; this object must then be associated with the task using the setPageTable method. A file-creation method of class FileSys is used to create the task's swap file; a create operation can fail if, for example, the device does not have enough space (see the description of class FileSys for more details about this method). The create method of ThreadCB returns the created thread. Finally, the kill method of ThreadCB destroys a thread; notice that this method calls your implementation of do removeThread to disassociate the thread from the task.

These attributes and methods are provided by the class IflTaskCB and are inherited. The methods appearing in the table are more fully described in Section 3. The identity of a task is set by the system, but it can be queried with the method getID.

Page table: The page table of a task is set with the method setPageTable and can be retrieved using getPageTable. Status: The status of a task is handled using the methods setStatus and getStatus. Priority: The priority of a task is handled using the methods setPriority and getPriority. Current thread: Indicates which thread of a task is currently running. The methods to query and modify this attribute are getCurrentThread and setCurrentThread.

Creation time: The creation time of a task is handled using the methods getCreationTime and setCreationTime. Table of open files: Two methods, addFile and removeFile, are used in conjunction with this table; the do-versions of the addFile and removeFile methods are part of the Tasks project.

Note that TaskCB never calls these methods; it implements them. Table of ports: Keeps track of all of the communication ports owned by a task. The do-versions of these methods are part of the Tasks project; TaskCB implements them and never calls them. Table of live threads: As with ports, OSP 2 does not prescribe how this table is to be implemented. The do-versions of these methods are implemented by the student.

These methods are implemented by TaskCB; they are never called by this class. These methods can be used in the implementation of this or other student packages. To the right of each method we list the class of the objects to which the method applies. In general, the public methods exported by a student package may belong to more than one class; see, for example, package Memory (Section 5).

Allowed values are TaskLive, for live tasks, and TaskTerm, for terminated tasks. The current thread is the thread that will run when the task is made current by the dispatcher.

Management and Scheduling of Threads

The objective of the Threads project is to teach students about thread management and scheduling in a modern-day operating system and to provide them with a well-structured programming environment in which to implement thread-management and scheduling techniques.

To this end, students will be asked to implement the two public classes of the Threads package: ThreadCB and the timer interrupt handler. The former implements the most common operations on a thread, while the latter can be used to implement time-quantum-based scheduling algorithms for threads.


We begin this chapter with an overview of thread basics. There are at least four reasons why it is desirable to structure applications as multi-threaded ones. Parallel Processing: A multi-threaded application can process one batch of data while another is being input from a device.

On a multiprocessor architecture, threads may be able to execute in parallel, leading to more work getting done in less time. Program Structuring: Threads represent a modular means of structuring an application that needs to perform multiple, independent activities. Interactive Applications: In an interactive application, one thread can be used to carry out the current command while, at the same time, another thread prompts the user for the next command. Asynchronous Activity: A thread can be created whose sole job is to schedule itself to perform periodic backups in support of the main thread of control in a given application.

Threads can execute concurrently. Thus, for example, a server process can service a number of clients concurrently, with each client handled by its own thread. We thus see that there is considerable incentive from an application programming perspective for an OS to support multi-threading. Threads as Independent Entities. In OSP 2, every thread belongs to a task: a task is a container for one or more threads, and each of these threads has shared access to the resources owned by the task.

There is, however, certain information associated with a thread that allows it to execute as a more or less independent entity: its execution context. This context includes the contents of the machine registers when it was last running; in particular, every thread has its own, independent program counter. All the threads of a given task reside in the same address space and have access to the same data. Scheduling Algorithms for Threads. As previously noted, threads are the schedulable units of execution in OSP 2 and any other OS that supports threads.

This represents a shift from older operating systems like traditional Unix, in which processes played this role. When the running thread blocks, the OS can switch the CPU to another ready thread; in this way, the CPU is kept busy most of the time, thereby increasing its utilization.

So what are the kinds of events that threads may block on? Typical examples are the completion of I/O operations and the arrival of messages. It should be noted, however, that an OS can decide to perform a context switch any time it is convenient, again for the purpose of improving system performance. Convenient in this case means any time control resides within the OS, and includes occasions such as timer interrupts and system call invocations. The question you must now ask yourself is: which thread should the OS schedule next when a context switch is to take place? Several criteria are used to evaluate the answer. CPU utilization: the fraction of time the CPU is kept busy doing useful work. Response time: typically one is interested in the average response time over all commands.

Turnaround time: The amount of time needed to process a given task, including actual execution time plus time spent waiting for resources, including the CPU. The answer to the question as to which thread to schedule next lies in the CPU scheduling algorithm the OS implements. Such algorithms differ along several dimensions. Emphasis on response time vs. CPU utilization.

Algorithms of the former kind can be thought of as user-oriented and those of the latter kind as system-oriented. Preemptive vs. nonpreemptive. A preemptive algorithm may interrupt a thread and move it to the ready-to-run queue, while in the nonpreemptive case, a thread continues to execute until it terminates or blocks on some event.

Fair vs. unfair. In the absence of fairness, starvation is possible, and the algorithm is said to be unfair in this case. Choice of selection function. The selection function determines which thread, among the ready-to-run threads, is selected next for execution. The choice can be based on priority, resource requirements, or execution characteristics of the thread, such as the amount of elapsed time since the thread last got to execute on the CPU.

In describing these algorithms, we assume the existence of a ready queue where ready-to-run threads lie in wait for the CPU. First Come, First Served (FCFS). Threads are dispatched in the order in which they join the ready queue. Round Robin. Like FCFS, but each thread gets to execute for a length of time known as the time slice or time quantum before it is preempted and placed back on the ready queue. STN. This is a nonpreemptive policy in which the thread with the shortest expected processing time is selected next; the scheduler must have an estimate of processing time to perform the selection function.

SRT. This is a preemptive version of STN in which the thread with the shortest expected remaining processing time is selected next. Another policy favors short threads but also gives priority to aging threads with high values of w, the time already spent waiting. Multilevel feedback queues. Threads enter the system at the top-level queue.

If a thread gains control of the CPU and exhausts its time quantum, it is demoted to the next lower queue.

The lowest queue implements pure round robin. The selection function chooses the thread at the head of the highest non-empty queue. Thus this algorithm penalizes long-running threads since each time they use up their time quantum, they are demoted to the next lower queue. Priority-Driven Preemptive Scheduling. The basic idea of this scheme is that when a thread becomes ready to execute whose priority is higher than the currently executing thread, the lower-priority thread is preempted and the processor is given to the higher-priority thread.

The rest of this chapter describes each class in the package Threads in detail. These classes are placed in a larger context in the class diagram given in Figure 4. Before discussing the required functionality of the methods in ThreadCB we need to look deeper into the nature of OSP 2 threads.

State transitions. Thread management is concerned with two main issues, both of which are reflected in the state-transition diagram of Figure 4. A thread can be dispatched only if it has the status ThreadReady. One sad thing about OSP 2 threads is that they never die of natural causes: in other words, there is no separate system call to terminate a thread normally and there is no special state to denote normal thread termination.

A running thread can be preempted and placed back into the ready queue, or it can be suspended to the waiting state. OSP 2 does not place any restrictions on the way the ready queue is implemented, so you should use your own design; depending on the scheduling policy, some designs might be much better than others. An OSP 2 thread can be at several levels of waiting. When a running thread enters the pagefault handler, or when it executes a blocking system call, it enters the waiting state at level 0 (status ThreadWaiting).

A thread is not always blocked when it enters a waiting state. In other words, the thread switches hats: it starts executing operating-system code on behalf of the request. Such a system thread might do some work needed to process the request and then it might execute another system call. If the second system call is blocking, the thread descends one more waiting level. To illustrate this process, consider the processing of a pagefault (Chapter 5). When a pagefault occurs, the thread enters the level 0 waiting state, executes a page replacement algorithm, and then makes a system call to write the evicted page out.

Next, while still in the pagefault handler, the thread would execute the read system call and go into the waiting state at levels 1 and 2, similar to the write call. At this point, the pagefault handler performs some record-keeping operations (see Chapter 5) and notifies the event on which the faulting thread was suspended.

This causes the thread to change its status from ThreadWaiting to ThreadReady. In sum, an OSP 2 thread can be suspended to several levels of depth by executing a sequence of nested suspend operations. When all the corresponding events happen, the resume method is called on the thread, which decreases the wait level by 1. When all the events on which the thread is suspended occur, the thread goes back into the ThreadReady state. Context switching.

Passing control of the CPU from one thread to another is called context switching. This has two distinct phases: preempting the currently running thread and dispatching a new one. Preempting a thread involves the following steps. Changing the state of the currently running thread from ThreadRunning to whatever is appropriate in the particular case; if the thread has used up its time quantum, then the new status should become ThreadReady.

Changing the status is done using the method setStatus, described later. This step requires knowing the currently running thread: the MMU can be asked for the page table currently installed in the PTBR; the task itself can be obtained by applying the method getTask to this page table, and the currently running thread is then determined using the method getCurrentThread. Setting the page table base register (PTBR) to null. The PTBR is a register of the memory management unit (MMU), a piece of hardware that controls memory access, and it always points to the page table of the running thread.

This is how MMU knows which page table to use for address translation. Changing the current thread of the previously running task to null. The current thread of a task can be set using the method setCurrentThread. When a thread, t, is selected to run, it must be given control of the CPU. This is called dispatching a thread and involves a sequence of steps similar to the steps for preempting threads: The status of t is changed from ThreadReady to ThreadRunning. PTBR is set to point to the page table of the task that owns t.

Finally, the current thread of the task that owns t must be set to t using the method setCurrentThread. Likewise, if no thread is running and the dispatcher chooses some ready-to-run thread for execution, you can view it as a context switch from the null thread to t. Before going on you must revisit Section 1.
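Pulling the preemption and dispatching steps together, a hedged sketch of do dispatch might look like the following. MMU.getPTBR and MMU.setPTBR are used here as the PTBR accessors, and readyQueue again stands for your own ready-queue structure; treat these names as assumptions.

    // Hedged sketch of do_dispatch: preempt the current thread (if any),
    // then give the CPU to some ready thread.
    public static void do_dispatch() {
        // Phase 1: preempt whoever is running now.
        PageTable currentPT = MMU.getPTBR();            // assumed accessor name
        if (currentPT != null) {
            TaskCB runningTask = currentPT.getTask();
            ThreadCB runningThread = runningTask.getCurrentThread();
            if (runningThread != null) {
                runningThread.setStatus(ThreadReady);   // e.g., quantum expired
                readyQueue.append(runningThread);       // placeholder queue call
            }
            runningTask.setCurrentThread(null);
            MMU.setPTBR(null);                          // assumed accessor name
        }

        // Phase 2: dispatch the next ready thread, if there is one.
        ThreadCB next = readyQueue.removeHead();        // placeholder queue call
        if (next != null) {
            next.setStatus(ThreadRunning);
            MMU.setPTBR(next.getTask().getPageTable());
            next.getTask().setCurrentThread(next);
        }
    }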

The state transition diagram shows that, to a large extent, thread management is driven by two operations, suspend and resume. The suspend operation places a thread into the waiting queue of the event passed as an argument and increases the wait level; the resume operation decreases the wait level and, if appropriate, places the thread into the queue of ready-to-run threads, in which all threads are in the ThreadReady state. All this is accomplished using the Event class discussed in Section 1. After all the relevant events have occurred, the thread is free to execute again and is placed on the ready queue.

Methods of class ThreadCB. These are the methods that have to be implemented as part of the project.

We discuss these methods as part of the required functionality and then give a summary of them in a separate section. The init method of the package's main class can be used to set up static variables that are used in your implementation, if necessary. Thread creation must respect the maximum number of threads allowed per task: if this number is exceeded, no new thread should be created for that task, and null should be returned. The new thread must also be assigned a priority; the actual value of the priority depends on the particular scheduling policy used.

OSP 2 provides methods for setting and querying the priority of both tasks and threads. Finally, the status of the new thread should be set to ThreadReady and it should be placed in the ready queue. If all is well, the thread object created by this method should be returned. It is important to keep in mind that each time control is transferred to the operating system, it is seen as an opportunity to schedule a thread to run.

Therefore, regardless of whether the new thread was created successfully, the dispatcher must be called, or else a warning will be issued.

To destroy a thread (do kill), its status must be set to ThreadKill and a number of other actions must be performed depending on the current status of the thread. The status of a thread can be obtained via the method getStatus.

If the thread is ready, then it must be removed from the ready queue. If a running thread is being destroyed, then it must be removed from controlling the CPU, as described earlier. There is nothing special to do if the killed thread has status ThreadWaiting at any level. However, you are not done yet.

Suppose the killed thread had an outstanding I/O request, represented by an IORB. What should now happen to the IORB?

Should you just let the device work on a request that came from a dead thread? Clearly the pending requests should be purged. This can be done by scanning all devices in the device table and executing the method cancelPendingIO on each device; the size of the device table is given by getTableSize of class Device. During the run, threads may also acquire and release shared resources that are needed for their execution. Therefore, when a thread is killed, those resources must be released into the common pool so that other threads can use them.

This is done using the static method giveupResources of class ResourceCB, which accepts the thread being killed as a parameter. Two things remain to be done now. First, you must dispatch a new thread, since you should use every interrupt or system call as an opportunity to optimize CPU usage.

Second, since you have just killed a thread, you must check if the corresponding task still has any threads left.

A task with no threads is considered dead and must be destroyed with the kill method of class TaskCB. As can be seen from Figure 4, the effect of suspend depends on the thread's current status: if the thread is running, then it is suspended to ThreadWaiting; if it is already waiting, then the status is incremented by 1. You now must set the new thread status using the method setStatus and place the thread on the waiting queue of the event. If suspend is called to suspend the running thread, then the thread must lose control of the CPU.

Switching control of the CPU can also be done in the dispatcher as part of the context switch , but it has to be done somewhere to avert an error.

Finally, a new thread must be dispatched using a call to dispatch.

The resume operation undoes one level of suspension: it decrements the thread's waiting level and, if the thread thereby becomes ready, it should be placed on the ready queue for future scheduling. Finally, a new thread should be dispatched. A typical sequence of actions that leads to a call to resume is as follows: when an event happens, the method notifyThreads is invoked on the appropriate Event object. This method examines the waiting queue of the event, removes the threads blocked on this event one by one, and calls resume on each such thread.

So, by the time do resume is called, the corresponding thread is no longer on the waiting queue of the event.

Choosing the next thread to run is the job of do dispatch. Scheduling can be as simple as plain round-robin or as complex as multi-queue scheduling with feedback.

OSP 2 does not impose any restrictions on how scheduling is to be done, provided that the following conventions are followed. First, some thread should be chosen from the ready queue, or the currently running thread can be allowed to continue. Apart from the methods of the Event class listed above, several methods of other classes should or can be used to implement the methods in class ThreadCB described above. One of them retrieves a device from the device table by index; in conjunction with getTableSize, it can be used in a loop to examine each device in turn.

Note that all devices are mounted by OSP 2 at the beginning of the simulation and no devices are added or removed during a simulation run. Note also that cancelPendingIO does not cancel the IORB that is currently being serviced by the device. PTBR holds a reference to the page table of the currently running task: when no thread is running, the value should be null; otherwise, it must be the page table of the task that owns the currently running thread. See Section 1.

These attributes and methods are provided by the class IflThreadCB and are inherited. The methods appearing in the table are more fully described in Section 4. Owner task: the task that owns the thread; this property can be set and queried via the methods setTask and getTask. ID: the identity of a thread can be obtained using the method getID; this property is set by the system.

Status: the status of the thread; the relevant methods are setStatus and getStatus. Priority: the priority of the thread, handled via setPriority and getPriority. Creation time: the value of this property can be obtained using the method getCreationTime. CPU time used: the amount of CPU time the thread has consumed so far.

The timer interrupt handler is the simplest of all interrupt handlers in OSP 2. Its main purpose is to schedule the next thread to run and, possibly, to set the timer to cause an interrupt again after a certain time interval.

Resetting the timer can also be done in the dispatch method of ThreadCB instead, because the dispatcher might want to have full control over the CPU time slices allocated to threads. Several methods that belong to other classes might be useful for implementing do handleInterrupt; one of them cancels the previously set timer, if any.
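A minimal sketch of the timer interrupt handler follows. The class and superclass names, the HTimer.set call, and the 100-tick quantum are assumptions made for illustration; substitute the names and signatures from your project templates.

    // Hedged sketch: time-quantum scheduling via the timer interrupt handler.
    public class TimerInterruptHandler extends IflTimerInterruptHandler {
        private static final int QUANTUM = 100;   // illustrative value, in ticks

        public void do_handleInterrupt() {
            HTimer.set(QUANTUM);     // arrange for the next timer interrupt (assumed call)
            ThreadCB.dispatch();     // preempt the current thread and pick another
        }
    }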

The setPriority and getPriority methods are provided for convenience, in case the assignment calls for priority scheduling. OSP 2 does not actually care how priority is used, if at all.

Virtual Memory Management

The main class of the Memory package, MMU, represents the memory-management unit, the piece of hardware that is responsible for memory access in a computer.

All of these classes are described in detail later on in the chapter, each in its own subsection. We begin with an overview of memory-management basics. MMU is responsible for providing access to main memory. In OSP 2 , memory access is simulated by calling the method refer of the class MMU, which is one of the key methods to be implemented in this project.

However, the MMU is the gateway to memory for executing threads, and it provides you with a golden opportunity to implement the memory-management technique your Memory assignment calls for.

Memory management and multiprogramming. Modern memory-management techniques are aimed at supporting multiprogramming and must therefore allow multiple processes or threads to be memory-resident simultaneously.

In this way, when the currently executing thread becomes blocked (e.g., on I/O), the CPU can be given to a thread of another memory-resident process. Partitioning memory. Memory can be divided into fixed-size or variable-size partitions. The former results in internal fragmentation, which occurs when a process does not utilize the entirety of a partition; the latter results in external fragmentation, which occurs when a partition is too small to be of use to any process. Segmentation is an alternative to paging that uses variable-size partitions. Logical memory. The big advance in memory management came with the realization that the memory allocated to a process need not be contiguous!

Thus, in theory, a page of a process can be placed in any available page frame. The primary mechanisms used for implementing logical memory are the page table base register (PTBR) and the page table. The key issue here is logical address translation, i.e., how to convert a logical address into a physical address, and this is the responsibility of the MMU.

A logical address is just a string of bits, split into a page number and an offset within the page. Every process has a page table of its own, and when a thread is dispatched on the CPU, the address of the page table of the process to which the thread belongs is placed in the PTBR.

The overall schema is depicted in Figure 5: the page number of a logical address indexes the page table (located via the page table base register), yielding a frame number to which the offset is appended.


Virtual memory. The simple memory-addressing mechanism just described works well as long as the frames corresponding to the pages of a process are all in main memory. However, as Figure 5 suggests, this need not always be the case. The key insight behind virtual memory is that a page table can have more entries than the number of physical page frames, so a one-to-one assignment of frames to pages might not be possible.

Note that we use the term virtual memory now instead of logical memory to emphasize the fact that larger-than-physical-memory address spaces are supported by this scheme.

Pagefaults. The key mechanism for implementing virtual memory is that each page table entry has a validity bit, which indicates whether the page has a main-memory frame assigned to it. This bit is checked by the MMU hardware, and whenever a running thread makes a reference to a page whose validity bit is zero, a pagefault occurs. The intended response from the OS is to assign a suitable frame to the page.

The module responsible for this action is called the pagefault handler. A page whose validity bit is one (i.e., a valid page) already has a frame assigned to it and can be referenced directly. If no frame is assigned to a page, where is the program code or data that the running thread is supposedly referencing? The answer is that a copy of the entire process space is kept in secondary storage on a swap device. In high-performance systems, a swap device can be a separate disk, but typically it is just a partition occupying part of a physical disk.

Nevertheless, the operating system assigns a logical device to each such partition, and at that level the swap device can be viewed as a separate device with its own characteristics and device number. Thus, every process (i.e., every task) has its image stored in a swap file on the swap device. When a pagefault on page P of task T occurs, the pagefault handler has to do several things. First, the faulting thread must be suspended; this is done by creating a new event, pfEvent, of type SystemEvent and then executing suspend on the thread using pfEvent as a parameter.

A new system event is created using the constructor SystemEvent of class SystemEvent. This event must be kept around until the end of pagefault processing, as it is needed to resume the thread before returning from the pagefault handler. Find a suitable frame to assign to page P.

An obvious choice would be a free frame, i.e., one that currently holds no page. But there might not be such a frame at the moment (remember that there are fewer frames than pages!). In this case, page replacement must be performed, as described below. The result of a successful page-replacement action is that a free frame becomes available and is assigned to page P.

Perform a swap-in. Once a frame is assigned to the faulty page, you need to make sure that the frame actually holds the contents of that page. To do this, the pagefault handler must initiate a swap-in, reading the page image from the task's swap file, and suspend itself until the read completes. Finish up. Once the image of the right page is copied into the frame, the pagefault handler should update the page table to make sure that the page entry is pointing to the right frame, and set the validity bit of the page appropriately.

Next, the thread that caused the pagefault should be resumed and placed on the queue of the ready-to-run threads. This is done by executing the method notifyThreads on the event pfEvent, which was created in Step 1. Finally, as with any other interrupt handler, the dispatcher should be called to give control of the CPU to some ready-to-run thread. Page replacement. In describing the actions of the pagefault handler, we deliberately omitted a saga of its own: how to free up a frame when none is available. The algorithm deployed by the pagefault handler for choosing such a frame is called the page-replacement algorithm, and the most well-known algorithm of this kind is LRU (Least Recently Used).
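Gathering the steps above into one place before turning to page replacement in detail, here is a hedged sketch of the pagefault handler. The method signature is simplified, the helpers findFreeFrame, choosePageToEvict, and swapIn are placeholders for logic described in the text, and the SystemEvent constructor arguments and page-table setters are assumptions to be checked against the API.

    // Hedged sketch of pagefault handling for 'page' referenced by 'thread'.
    public void do_handleInterrupt(ThreadCB thread, PageTableEntry page) {
        // 1. Suspend the faulting thread on a fresh system event.
        SystemEvent pfEvent = new SystemEvent("pagefault");   // constructor args assumed
        thread.suspend(pfEvent);

        // 2. Find a frame: prefer a free one, otherwise run page replacement.
        FrameTableEntry frame = findFreeFrame();
        if (frame == null) {
            frame = choosePageToEvict();   // must skip locked frames, write back dirty ones
        }

        // 3. Swap the page image in from the task's swap file; this blocks
        //    the handler until the read completes.
        swapIn(thread, page, frame);

        // 4. Finish up: wire the page to its frame and mark it valid (setter names assumed).
        page.setFrame(frame);
        page.setValid(true);

        // 5. Wake the faulting thread and let the dispatcher pick a thread to run.
        pfEvent.notifyThreads();
        ThreadCB.dispatch();
    }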

LRU replaces the page in memory that has not been referenced for the longest time. Assuming that threads exhibit the principle of locality, meaning that they cluster memory references around a certain subset of their pages over a given window of time, then the LRU page should be the least likely page to be referenced in the near future and its replacement is a good bet.

Because true LRU is costly to implement exactly, many practical algorithms approximate it with use (reference) bits; consider, for simplicity, the case of a single use bit added to each frame. (Handling disk interrupts is part of another project, module Devices.) From a performance perspective, a good page-replacement algorithm is characterized by a low pagefault rate.
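As a self-contained illustration of the single-use-bit (clock, or second-chance) approximation to LRU just described, in plain Java rather than the OSP 2 frame table API:

    // Hedged sketch of the clock (second-chance) replacement algorithm.
    final class Frame {
        boolean used;    // the single use bit
        boolean locked;  // frames with pending I/O must not be taken
        int page = -1;   // page currently held, -1 if free
    }

    final class ClockReplacer {
        private final Frame[] frames;
        private int hand = 0;                       // position of the clock hand

        ClockReplacer(Frame[] frames) { this.frames = frames; }

        // Pick a victim frame: sweep, clearing use bits, until an unused,
        // unlocked frame is found. Assumes not every frame is locked.
        int chooseVictim() {
            while (true) {
                Frame f = frames[hand];
                int index = hand;
                hand = (hand + 1) % frames.length;
                if (f.locked) {
                    continue;                       // never evict a locked frame
                }
                if (f.used) {
                    f.used = false;                 // give it a second chance
                } else {
                    return index;                   // victim found
                }
            }
        }
    }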

Locking and unlocking of frames. Frames involved in pending I/O must be kept in memory, and OSP 2 tracks this with a lock count per frame. There are a variety of ways to maintain such a count; here is an explanation of how it is done in OSP 2.

An IORB does not refer to frames directly; it refers to a page of the requesting task, and that page is locked when the I/O request is set up. If no frame is assigned to the page, a pagefault occurs, and the IORB will not be enqueued on the device until the pagefault processing is over. The lock operation increments the lock count of the frame associated with the page, and the unlock operation decrements it. A page is considered to be locked in a frame if the lock count of the associated frame is a positive number.

Thus, by the time the IORB makes it to the device queue, the page involved in the I/O operation is locked in its frame. The page-replacement mechanism is prohibited from taking frames that have positive lock counts.

The reason should be obvious: the page must be resident in memory when the device gets around to the request; if it were not, it would have to be swapped in, so the selected IORB could not be processed and the device would remain idle. If the page being locked is frame-less, a pagefault occurs and the page is brought in before the IORB is selected for processing. Dirty frames. Locking is not the only constraint that a page-replacement mechanism must abide by. Another issue has to do with so-called dirty frames. A dirty frame is one whose contents have been changed since the last time a page was swapped into the frame.

The contents of a dirty frame must be written back to the swap device before the frame is given to another page; otherwise, all changes made to the page will be lost. In fact, an OSP 2 frame table entry records more information than just the dirty status.

