The Way We Were

Before Eclipse 3.0, an operation on any resource would lock the entire workspace. At the tail end of the operation, the delta phase occurred: any interested parties could respond to the changes made by the operation, including builders, which were given an opportunity to perform incremental builds. The advantage of this approach was its simplicity. Clients could write operations and resource delta listeners without worrying about concurrency. The disadvantage was that the user had to wait until an operation completed before the UI became responsive again. The UI still allowed the user to cancel the currently running operation, but no other work could be done until the operation completed. Some operations were performed in the background (resource decoration and JDT file indexing are two such examples), but these operations were restricted in the sense that they could not modify the workspace. If a background operation did try to modify the workspace, the UI thread would block if the user explicitly performed an operation that modified the workspace and, even worse, the user would not be able to cancel the operation. A further complication was that the interaction between the independent locking mechanisms of different plug-ins often resulted in deadlock. Because of the independent nature of the locks, there was no way for Eclipse to recover from the deadlock, which forced users to kill the application.

The Brave New World

The functionality provided by the workspace locking mechanism can be broken down into the following three aspects:

- exclusive access: only one operation could modify the workspace at a time;
- change batching: resource change notifications were batched and broadcast to delta listeners at the end of the operation;
- build triggering: builders were given a chance to perform incremental builds once the operation's changes were known.
With the introduction of the Jobs API, these areas have been divided into separate mechanisms and a few additional facilities have been added. The following list summarizes the facilities, each of which is covered in a later section:

- jobs: units of work that run asynchronously in a background thread;
- scheduling rules: a pluggable way of declaring which jobs may run concurrently;
- resource change notification batching: workspace runnables and workspace jobs that batch resource deltas without locking the whole workspace;
- progress feedback: job properties that control how progress and completion are presented to the user;
- locks: a deadlock-detecting alternative to Java's built-in synchronization for protecting a plug-in's own data structures.
The rest of this article provides examples of how to use the above-mentioned facilities.

Jobs

The Job class, provided in the org.eclipse.core.runtime plug-in, allows clients to easily execute code in a separate thread. This section introduces the Job API through a series of examples.

Example 1

The following code snippet shows a simple example of how to create and run a job.

```java
Job job = new Job("My First Job") {
   protected IStatus run(IProgressMonitor monitor) {
      System.out.println("Hello World (from a background job)");
      return Status.OK_STATUS;
   }
};
job.setPriority(Job.SHORT);
job.schedule(); // start as soon as possible
```

In the first statement of the above example, we create an anonymous subclass of Job, providing the job with a name. All subclasses of Job must implement the run method, which is invoked in a background thread when the job is executed.

Example 2

At first glance, the previous example is not significantly different from using a plain Java thread. The next example shows some of what the Job infrastructure adds.

```java
final Job job = new Job("Long Running Job") {
   protected IStatus run(IProgressMonitor monitor) {
      try {
         while (hasMoreWorkToDo()) {
            // do some work
            // ...
            if (monitor.isCanceled())
               return Status.CANCEL_STATUS;
         }
         return Status.OK_STATUS;
      } finally {
         schedule(60000); // schedule to run again in 60 seconds
      }
   }
};
job.addJobChangeListener(new JobChangeAdapter() {
   public void done(IJobChangeEvent event) {
      if (event.getResult().isOK())
         System.out.println("Job completed successfully");
      else
         System.out.println("Job did not complete successfully");
   }
});
job.setSystem(true);
job.schedule(); // start as soon as possible
```

The above job will run until there is no more work to do or until the job is canceled. The monitor.isCanceled() check shows how a job detects that a cancel has occurred. The job code is in total control of whether the job is canceled and when, thus ensuring that job cancelation does not result in an invalid state. A job is canceled by invoking its cancel method:

```java
job.cancel();
```

The effect of calling cancel therefore depends on the running job checking for cancelation, as shown above. An IJobChangeListener (here, a JobChangeAdapter subclass) can be added to a job to be notified of changes in the job's state; in this example, the done notification is used to report whether the job completed successfully.

There are three categories of jobs: system, user and default. The distinction is that system jobs, by default, do not appear in the Progress view (unless the view is in verbose mode) and do not animate the status line. The job in the above example has been marked as a system job (the setSystem(true) call). User jobs and default jobs will show UI affordances when running.
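Outside Eclipse, the cancelation-polling pattern from Example 2 can be reproduced with plain java.util.concurrent. The sketch below is illustrative and uses no Eclipse API; the CancelableWork class and its method names are invented for this example. The key point mirrors the Job contract: the worker checks a cancel flag at safe points and stops cleanly, rather than being killed from outside.

```java
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * A minimal sketch (plain Java, not the Eclipse Jobs API) of the
 * cancelation-polling pattern: the work loop checks a cancel flag at
 * safe points, so cancelation never leaves the work in an invalid state.
 */
public class CancelableWork {
    private final AtomicBoolean canceled = new AtomicBoolean(false);

    /** Asks the work to stop at the next safe point, as job.cancel() does. */
    public void cancel() {
        canceled.set(true);
    }

    /**
     * Performs up to maxSteps units of work, polling the cancel flag
     * before each step. Returns the number of steps actually completed.
     */
    public int run(int maxSteps) {
        int done = 0;
        for (int i = 0; i < maxSteps; i++) {
            // mirrors: if (monitor.isCanceled()) return Status.CANCEL_STATUS;
            if (canceled.get())
                break;
            done++; // "do some work"
        }
        return done;
    }
}
```

As with a real Job, calling cancel only requests cancelation; the work decides when it is safe to stop.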
In addition, a user job will show a progress dialog to the user with the option to be run in the background. More on this will be presented later.

Example 3

The following example illustrates how to use job families to control the execution of a set of jobs.

```java
public class FamilyMember extends Job {
   private String lastName;
   public FamilyMember(String firstName, String lastName) {
      super(firstName + " " + lastName);
      this.lastName = lastName;
   }
   protected IStatus run(IProgressMonitor monitor) {
      // Take care of family business
      return Status.OK_STATUS;
   }
   public boolean belongsTo(Object family) {
      return lastName.equals(family);
   }
}
```

In the above class, each job has a first and last name (the constructor) and all jobs that have the same last name are considered to be in the same family (the belongsTo method). The Eclipse platform provides a job manager that applies several job operations to an entire family. These operations include find, cancel, sleep, wakeUp and join.

```java
// Create some family members and schedule them
new FamilyMember("Bridget", "Jones").schedule();
new FamilyMember("Tom", "Jones").schedule();
new FamilyMember("Indiana", "Jones").schedule();
// Obtain the Platform job manager
IJobManager manager = Platform.getJobManager();
// put the family to sleep
manager.sleep("Jones");
// put the family to sleep for good!
manager.cancel("Jones");
```

This section has introduced the basic Jobs API. The next section will look at how to prevent jobs that operate on the same resources from interfering with each other.

Job Scheduling Rules

An important aspect of managing concurrently running jobs is providing a means to ensure that multiple threads can safely access shared resources. This is typically done by letting a job acquire a lock on a particular resource while it is accessing it. Locking gives rise to the possibility of deadlock: a situation where multiple threads are contending for multiple resources in such a way that none of the threads can complete their work because of locks held by other threads.
The simplest form of deadlock can be illustrated as follows. Assume that two threads each need to hold two locks, A and B, to do some work, and will release them when the work is done. However, they obtain the locks in a different order, which can lead to deadlock. Thread 1 obtains lock A while, at approximately the same time, thread 2 obtains lock B. Before either lock is released, both threads try to obtain the other's lock. This results in both threads being blocked indefinitely, since neither can continue until the other releases the held lock, but neither will release a lock until the second lock is obtained.

The Jobs API addresses this contention problem with scheduling rules (ISchedulingRule), which are assigned to a job before it is scheduled:

```java
ISchedulingRule myRule = ...;
job.setRule(myRule);
```

In order to avoid resource contention and deadlock, there are two constraints associated with scheduling rules:

- two jobs whose scheduling rules conflict will never be run concurrently;
- while a job is running, it may only acquire additional rules that are contained within the rule it already holds.
The implementation of the rule contract is captured by the ISchedulingRule interface:

```java
public interface ISchedulingRule {
   public boolean isConflicting(ISchedulingRule rule);
   public boolean contains(ISchedulingRule rule);
}
```

The first constraint is fairly self-explanatory and is supported through the implementation of isConflicting: two rules that conflict will never be held at the same time. The second constraint, rule nesting, is supported through contains. This API does not make deadlock impossible. However, when deadlock occurs between the locking mechanisms provided by this API, a built-in deadlock detection facility will at least allow execution of the threads involved to continue (more on this later). Of course, deadlocks that involve other locking mechanisms will not be detected or resolved.

Example 1: IResource and ISchedulingRule

The org.eclipse.core.resources plug-in implements the ISchedulingRule interface for workspace resources, so an IResource can be used directly as a scheduling rule.

```java
final IProject project =
   ResourcesPlugin.getWorkspace().getRoot().getProject("MyProject");
Job job = new Job("Make Files") {
   public IStatus run(IProgressMonitor monitor) {
      try {
         monitor.beginTask("Create some files", 100);
         for (int i = 0; i < 10; i++) {
            project.getFile("file" + i).create(
               new ByteArrayInputStream(("This is file " + i).getBytes()),
               false /* force */,
               new SubProgressMonitor(monitor, 10));
            if (monitor.isCanceled())
               return Status.CANCEL_STATUS;
         }
      } catch (CoreException e) {
         return e.getStatus();
      } finally {
         monitor.done();
      }
      return Status.OK_STATUS;
   }
};
job.setRule(ResourcesPlugin.getWorkspace().getRoot());
job.schedule();
```

The above code reserves exclusive write access to the resources contained in the workspace by associating a scheduling rule with the job (the setRule call). The job will not be run while other threads hold a conflicting rule. The contains relationship for resources follows the parent-child hierarchy: the rule for a resource contains the rules for all of its descendants. Given this relationship, our job will not run if a scheduling rule is held by another thread for the workspace root itself or for any of the resources contained in the workspace. Once this job is running, no other threads will be able to obtain a rule for the above-mentioned resources until the job in our example completes.
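The contains/isConflicting semantics just described can be sketched in a standalone class. PathRule below is not the Eclipse interface or its resource implementation; it is an invented illustration over slash-separated path strings: a rule contains everything under its path, and two rules conflict when either path is a prefix of the other, roughly how resource rules nest along parent-child lines.

```java
/**
 * A self-contained sketch of the two-method scheduling-rule contract
 * (not the real Eclipse ISchedulingRule) for slash-separated paths.
 */
public class PathRule {
    private final String path; // e.g. "/MyProject/folder"

    public PathRule(String path) {
        this.path = path;
    }

    private static boolean isPrefix(String parent, String child) {
        return child.equals(parent) || child.startsWith(parent + "/");
    }

    /** True if this rule's subtree wholly covers the other rule's subtree. */
    public boolean contains(PathRule other) {
        return isPrefix(this.path, other.path);
    }

    /** True if the two subtrees overlap, so the rules cannot be held concurrently. */
    public boolean isConflicting(PathRule other) {
        return isPrefix(this.path, other.path) || isPrefix(other.path, this.path);
    }
}
```

With this definition, a rule for "/ws" contains (and conflicts with) a rule for "/ws/MyProject/file1", while rules for two sibling projects neither contain nor conflict with each other, which is why jobs on disjoint projects may run concurrently.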
The problem with our previous example is that it locks the entire workspace while only touching files in a single project. This means that no other code, job or otherwise, that modifies any resource in the workspace can run concurrently with our job. To correct this, we could either provide a more specific scheduling rule (i.e., lock only what we need) or not provide a scheduling rule for the job at all. We can get away with the latter approach because the resource API obtains the scheduling rules it needs internally. Conceptually, the implementation of file creation looks something like this:

```java
public void create(InputStream in, boolean force, IProgressMonitor pm) {
   IResourceRuleFactory ruleFactory =
      ResourcesPlugin.getWorkspace().getRuleFactory();
   ISchedulingRule rule = ruleFactory.createRule(this);
   try {
      Platform.getJobManager().beginRule(rule, pm);
      // create the file
   } finally {
      Platform.getJobManager().endRule(rule);
   }
}
```

Notice that the determination of the scheduling rule is delegated to a rule factory (IResourceRuleFactory), obtained from the workspace. In the case of creating a file, the factory's createRule method returns the rule that must be held in order to create that resource. Once we have the rule we need, we can use the beginRule and endRule methods of the job manager to acquire and release it. When using beginRule, care must be taken to ensure that endRule is invoked for every rule that was begun; a try/finally block, as shown above, is the safest way to do this. It is worth reiterating at this point that the thread that calls beginRule must be the same thread that calls endRule.

Example 2: Using MultiRule

One of the restrictions of scheduling rules is that, if a running job holds a scheduling rule, it can only try to obtain new rules that are contained by the held rule. In other words, the outer-most scheduling rule obtained by a job must encompass all nested rules. In the previous example, we obtained the workspace root resource, which encompasses all resources in the workspace. This works but, as we said before, it prevents any other jobs from modifying resources while our job is running. A more targeted alternative is a MultiRule, which combines several rules into a single rule containing exactly the rules it was built from.
Here is a method that defines a multi-rule that can be used to create multiple files.

```java
public ISchedulingRule createRule(IFile[] files) {
   ISchedulingRule combinedRule = null;
   IResourceRuleFactory ruleFactory =
      ResourcesPlugin.getWorkspace().getRuleFactory();
   for (int i = 0; i < files.length; i++) {
      ISchedulingRule rule = ruleFactory.createRule(files[i]);
      combinedRule = MultiRule.combine(rule, combinedRule);
   }
   return combinedRule;
}
```

For each file, we obtain the creation rule from the rule factory. We combine the rules using the static combine helper on the MultiRule class, and then assign the combined rule to the job:

```java
job.setRule(createRule(files));
```

The job now will not run until the files are available and, once running, will only block other jobs that try to obtain rules on those specific files. Although recommended whenever possible, it is not required that a job pre-define its scheduling rule. As stated previously, scheduling rules can also be obtained within a running job using beginRule and endRule, subject to the nesting constraint.

Example 3: Read-access to Resources

Read access to resources does not require a scheduling rule. One implication of this is that information about resources, including the contents of files, can be accessed without blocking other threads that are modifying those resources. Another implication is that, when accessing resources in the workspace in a read-only fashion without holding a scheduling rule, the client must be aware that pre-checks cannot be used to guarantee the state of a resource at any future point. The following snippet illustrates the pattern.

```java
IFile file = ...; // some file
if (file.exists()) {                      // pre-check
   try {
      InputStream contents = file.getContents();
      // read the contents
   } catch (CoreException e) {
      if (!file.exists()) {               // after-the-fact check
         // the file was deleted underneath us; handle this as if
         // the pre-check had failed
      } else {
         // some other problem occurred
      }
   }
}
```

First, we check whether the file exists before accessing its contents. However, the file could be deleted by another thread after the pre-check passes but before the contents are obtained. This will result in an exception. To make our code thread-safe, we add the existence check in the catch block, which allows us to verify, after the fact, whether the file we cared about has been deleted. We can then do whatever we would have done had the initial existence check failed.
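The same check-then-recheck pattern applies to plain file I/O. The sketch below uses java.nio.file rather than the Eclipse resource API; the SafeRead class and readOrDefault method are invented for illustration. The exists() pre-check is only a hint, so the read still handles the file vanishing between the check and the access.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

/**
 * A plain-Java sketch of thread-safe reading without a lock: pre-check
 * existence, but also recover if the file is deleted after the check.
 */
public class SafeRead {
    /**
     * Returns the file's contents, or fallback if the file does not exist
     * or was deleted between the pre-check and the read.
     */
    public static String readOrDefault(Path file, String fallback) throws IOException {
        if (!Files.exists(file))          // pre-check: cheap, but not a guarantee
            return fallback;
        try {
            return new String(Files.readAllBytes(file));
        } catch (NoSuchFileException e) { // deleted after the pre-check passed
            return fallback;
        }
    }
}
```

The catch clause plays the role of the after-the-fact exists() check above: it distinguishes "deleted underneath us" from other I/O failures, which still propagate.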
Although the resource rule factory does have a method for obtaining a marker creation rule, creating a marker does not currently require a scheduling rule. This allows marker creation to be done in the background without affecting other threads that are modifying resources. Clients should still obtain the rule from the rule factory, because it is possible that marker creation could require a rule in the future.

There is a caveat when reading and writing files concurrently, at least on Linux and Windows. If one thread writes to a file as another is reading, it is possible that the reading thread could start reading the new contents. This is not very likely to happen unless a thread blocks while reading a file, since the reading thread will most likely use buffering and will be able to keep ahead of the writing thread. However, if your application must ensure that the contents read from a file are consistent, then some mechanism, be it the use of scheduling rules or something else, should be used. On Windows, the deletion of a file will fail if another thread is reading the contents of the file. Again, this can be handled by using scheduling rules when reading, but it can also be handled by catching the deletion failure and notifying the user. The latter approach is what the Windows file explorer does.

Resource Change Notification Batching

The old IWorkspace run method batched resource change notifications: all changes made by the runnable were broadcast as a single resource delta when the operation completed. With the Jobs API, delta batching and background execution are now separate choices.
The following table shows which API class or method to use depending on whether the user requires delta batching, concurrency (work in a separate thread), or both.

                   Same thread           Separate thread
  Delta batching   IWorkspace run        WorkspaceJob
  No batching      direct API calls      Job
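Independent of Eclipse, the delta batching discussed above can be sketched in plain Java. The BatchingNotifier class below is invented for illustration, not workbench API: changes recorded inside run(...) are delivered as a single batch when the runnable completes, while changes made outside a batch are delivered immediately.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * A standalone sketch of change-notification batching: one broadcast per
 * change normally, but a single combined broadcast for all changes made
 * inside a run(...) block.
 */
public class BatchingNotifier {
    private final List<String> pending = new ArrayList<>();
    private final List<List<String>> broadcasts = new ArrayList<>();
    private boolean batching = false;

    /** Records one change; fires immediately unless we are inside a batch. */
    public void changed(String what) {
        pending.add(what);
        if (!batching)
            fire();
    }

    /** Runs the work with batching enabled, firing once at the end. */
    public void run(Runnable work) {
        batching = true;
        try {
            work.run();
        } finally {
            batching = false;
            if (!pending.isEmpty())
                fire();
        }
    }

    private void fire() {
        broadcasts.add(new ArrayList<>(pending));
        pending.clear();
    }

    /** Number of notifications listeners would have seen. */
    public int broadcastCount() {
        return broadcasts.size();
    }
}
```

Two changes made outside run(...) produce two broadcasts; the same two changes made inside a single run(...) produce one, which is exactly the economy the workspace's batching methods provide for resource deltas.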
If delta batching is desired and the work is to be done in the same thread, then one of the IWorkspace run methods should be used:

```java
public void run(IWorkspaceRunnable runnable, IProgressMonitor monitor);
public void run(IWorkspaceRunnable runnable, ISchedulingRule rule,
      int flags, IProgressMonitor monitor);
```

Notice that the second method (which is new in 3.0) takes an additional ISchedulingRule argument, which allows the caller to lock only the resources the runnable will actually touch. In most cases, the firing of post-change deltas from within a workspace operation is not a problem. However, it will affect those who depended on these happening only at the end of an operation. For these cases, the new run method also takes an int flags argument; passing IWorkspace.AVOID_UPDATE requests that periodic delta broadcasts be suppressed while the operation runs.

Example 1: Using the old IWorkspace run method

The following example modifies the contents of the selected files.

```java
final IFile[] files = getSelectedFiles();
ResourcesPlugin.getWorkspace().run(new IWorkspaceRunnable() {
   public void run(IProgressMonitor monitor) throws CoreException {
      for (int i = 0; i < files.length; i++) {
         // modify the contents of files[i]
         // ...
      }
   }
}, null /* no progress monitor */);
```

In earlier versions of Eclipse, this code, although not perfect, was acceptable; there are now problems with this code, however:

- the old run method locks the entire workspace, even though only a few files are being modified, so no other workspace-modifying work can proceed concurrently;
- if it is called from the UI thread while another thread holds a conflicting rule, the UI will block, without feedback, until the rule becomes available.
Example 2: Using the new IWorkspace run method

Now we are going to convert the above example to use the new run method. First, we need a scheduling rule that covers modification of all the selected files:

```java
public ISchedulingRule modifyRule(IFile[] files) {
   ISchedulingRule combinedRule = null;
   IResourceRuleFactory ruleFactory =
      ResourcesPlugin.getWorkspace().getRuleFactory();
   for (int i = 0; i < files.length; i++) {
      ISchedulingRule rule = ruleFactory.modifyRule(files[i]);
      combinedRule = MultiRule.combine(rule, combinedRule);
   }
   return combinedRule;
}
```

This code is the same as the createRule method shown earlier, except that the factory's modifyRule method is used to obtain the rule for modifying each file. We then pass the combined rule to the new run method:

```java
final IFile[] files = getSelectedFiles();
ResourcesPlugin.getWorkspace().run(new IWorkspaceRunnable() {
   public void run(IProgressMonitor monitor) throws CoreException {
      for (int i = 0; i < files.length; i++) {
         // modify the contents of files[i]
         // ...
      }
   }
}, modifyRule(files), IWorkspace.AVOID_UPDATE, null /* no progress monitor */);
```

Example 3: Providing Progress and Cancellation

In the previous example, we illustrated how to use the new run method, but we passed a null progress monitor. If the operation is long-running, or may block waiting for a conflicting rule, the user should be given progress feedback and the chance to cancel. One way to do this is through the Workbench progress service:

```java
// Create a runnable that can be passed to the progress service
IRunnableWithProgress runnable = new IRunnableWithProgress() {
   public void run(IProgressMonitor monitor)
         throws InvocationTargetException, InterruptedException {
      // perform the workspace operation shown above, passing
      // the supplied monitor along to IWorkspace.run
   }
};
try {
   PlatformUI.getWorkbench().getProgressService().busyCursorWhile(runnable);
} catch (InvocationTargetException e) {
   // the operation failed
} catch (InterruptedException e) {
   // the user canceled
}
```

The above code creates an IRunnableWithProgress and hands it to the progress service. The advantage of using the progress service is that it will show a progress dialog that will give the user feedback about jobs that may be blocking this one and allow the user to cancel if the operation is taking too long. There are, however, factors we must be aware of with this code:

- the progress service is part of the Workbench UI and must be invoked from the UI thread;
- although the dialog keeps the UI alive, the user still waits for the operation; the work is not truly run in the background.
This second point is addressed by the next example.

Example 4: Using a WorkspaceJob

Another way to batch resource change notifications is to use a WorkspaceJob, which combines the delta batching of a workspace operation with background execution.

```java
final IFile[] files = getSelectedFiles();
WorkspaceJob job = new WorkspaceJob("Modify some files") {
   public IStatus runInWorkspace(IProgressMonitor monitor)
         throws CoreException {
      for (int i = 0; i < files.length; i++) {
         // modify the contents of files[i]
         // ...
      }
      return Status.OK_STATUS;
   }
};
job.setRule(modifyRule(files));
job.schedule();
```

The behavior of this job is similar to that of the workspace runnable in the previous examples: resource change notifications are batched until the job finishes. The difference is that the work is performed in a background thread.

Providing Feedback about Jobs

Although it is a good thing that jobs can be run in the background, it can be confusing when jobs that are launched as a direct result of user action just run in the background. When this happens, the user is not sure whether something is happening or if the action failed. One way of dealing with this is by showing progress in the Workbench progress area in the lower right corner of the Workbench window. Progress is shown in the progress area for any non-system job. By default, jobs are non-system jobs, so this feedback will happen unless the job is explicitly marked as a system job (using setSystem(true)).

Example 1: User Jobs

Showing progress in the progress area can still be too subtle for most users. In many cases, the user would like stronger feedback about the job's start and completion. The former indication can be provided by tagging the job as a user job:

```java
Job job = new Job("User initiated job") {
   protected IStatus run(IProgressMonitor monitor) {
      // do the work
      // ...
      return Status.OK_STATUS;
   }
};
job.setUser(true);
job.schedule();
```

In the setUser(true) call, the job has been identified as a user job. What this means is that the user will be shown a progress dialog but will be given the option to run the job in the background by clicking a button in the dialog. This was done to keep the user experience close to what it was pre-3.0 but still allow the user to benefit from background tasks. There is a Workbench option, "Always run in background", that can be enabled if a user does not want to see the progress dialog.
Example 2: User Feedback for Finished Jobs

If the user does not choose to run the job in the background, then they will know when the job has completed because the progress dialog will close. However, if they choose to run the job in the background (by using the dialog button or the preference), they will not know when the job has completed. Furthermore, a running job may accumulate information that should be displayed to the user when the job completes. This can be shown to the user immediately if the job is modal (i.e., the job was not run in the background). However, if the job was run in the background, the information should not be displayed immediately because it may interrupt what the user is currently doing. In these cases, an indicator is placed on the far-right side of the Workbench progress area to show that the job is done and has results for the user to view. Clicking on the indicator will display the result. Such a job will also leave an entry in the Progress view, which can be opened by double-clicking in the progress area. Clicking on the link in the Progress view will also open the result. Now let's have a look at how we can configure a job to give us this behavior.

```java
Job job = new Job("Online Reservation") {
   protected IStatus run(IProgressMonitor monitor) {
      // Make a reservation
      // ...
      setProperty(IProgressConstants.ICON_PROPERTY, getImage());
      if (isModal(this)) {
         // The progress dialog is still open so
         // just open the message
         showResults();
      } else {
         setProperty(IProgressConstants.KEEP_PROPERTY, Boolean.TRUE);
         setProperty(IProgressConstants.ACTION_PROPERTY,
            getReservationCompletedAction());
      }
      return Status.OK_STATUS;
   }
};
job.setUser(true);
job.schedule();
```

Let's assume that the purpose of the above job is to make a reservation for the user in the background. The user may decide to wait while the reservation is being made or decide to run it in the background.
When the job completes the reservation, it checks to see what the user chose to do (the isModal call). The isModal helper method is defined as follows:

```java
public boolean isModal(Job job) {
   Boolean isModal = (Boolean) job.getProperty(
      IProgressConstants.PROPERTY_IN_DIALOG);
   if (isModal == null)
      return false;
   return isModal.booleanValue();
}
```

This method checks the PROPERTY_IN_DIALOG property of the job, which the Workbench sets to indicate whether the job is being shown in a modal progress dialog. If the user chose to wait, the result can be shown immediately. However, if the user chose to run the job in the background, we want to configure the job so that the user is given an indication that the job has completed. This is done in the else branch by setting the KEEP_PROPERTY, which keeps the finished job in the Progress view, and the ACTION_PROPERTY, which supplies the action to run when the user asks to see the result:

```java
protected Action getReservationCompletedAction() {
   return new Action("View reservation status") {
      public void run() {
         MessageDialog.openInformation(getShell(),
            "Reservation Complete",
            "Your reservation has been completed");
      }
   };
}
```

When the user clicks on the results link, the action is run, resulting in an information dialog. It is worthwhile to look at the showResults method as well:

```java
protected static void showResults() {
   Display.getDefault().asyncExec(new Runnable() {
      public void run() {
         getReservationCompletedAction().run();
      }
   });
}
```

In this case, we run the same action, but we do it using an asyncExec so that the dialog is opened in the UI thread. There are a few other useful properties defined in IProgressConstants that control how a job is presented.

It is possible that multiple jobs are part of the same logical task. In these cases, the jobs can be grouped into a single entry in the Progress view using a progress group. Grouping is accomplished by creating a group monitor with the job manager's createProgressGroup method and associating it with each job via Job's setProgressGroup method.

Ensuring Data Structure Integrity

The scheduling rules presented so far are helpful for ensuring exclusive access to resources in an Eclipse workspace. But what about ensuring the thread safety of other data structures? When a data structure is being accessed by multiple threads concurrently, there is a possibility that two threads will interfere in such a way as to corrupt the data structure involved. We are not going to go into the details of how this could happen but instead leave that to the many books already published on the subject. We will, however, present one of the mechanisms that Eclipse provides to help in this situation.
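Before looking at the Eclipse lock API, here is a plain-Java sketch of the lock-ordering discipline that prevents the deadlock scenario described earlier. The OrderedLocking class is invented for illustration: both workers touch both protected structures, but every thread acquires lock A before lock B, so the A-then-B / B-then-A cycle cannot form and both workers always run to completion.

```java
import java.util.concurrent.locks.ReentrantLock;

/**
 * Demonstrates the "always obtain locks in the same order" rule with two
 * contending threads. Because both threads take lockA then lockB, the
 * circular wait needed for deadlock is impossible.
 */
public class OrderedLocking {
    private final ReentrantLock lockA = new ReentrantLock();
    private final ReentrantLock lockB = new ReentrantLock();
    private int counter = 0;

    private void doWork() {
        lockA.lock();           // every thread takes A first...
        try {
            lockB.lock();       // ...and B second, so no cycle is possible
            try {
                counter++;      // the "work" guarded by both locks
            } finally {
                lockB.unlock();
            }
        } finally {
            lockA.unlock();
        }
    }

    /** Runs two contending workers to completion; returns the final count. */
    public int runWorkers(int iterations) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < iterations; i++)
                doWork();
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter;
    }
}
```

If one worker instead took the locks in the opposite order, the program could hang exactly as in the two-thread diagram earlier; consistent ordering is what the rules at the end of this section formalize.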
Ensuring the integrity of internal data structures is the responsibility of the plug-in that maintains the data. In cases where there is no interaction between plug-ins, the facilities provided by Java (synchronized blocks and Object monitors) may be adequate. However, the use of these facilities can easily lead to deadlock in cases where there is interaction between multiple plug-ins that use locking of some kind to ensure the thread-safety of their data. The org.eclipse.core.runtime plug-in provides a lock implementation that can be used to ensure data structure integrity. An instance of a lock can be obtained in the following way:

```java
private static ILock lock = Platform.getJobManager().newLock();
```

Once you have a lock, each access to the protected data structure can be wrapped in the following way to ensure that no two threads are in a critical section at the same time:

```java
try {
   lock.acquire();
   // Access or modify data structure
} finally {
   lock.release();
}
```

A call to the acquire method blocks the calling thread until the lock becomes available. Locks are reentrant, so a thread that already holds the lock may acquire it again, as long as each acquire is matched by a release. There is also an acquire method that takes a timeout and returns a boolean indicating whether the lock was actually obtained.

The advantage of using the locking mechanism provided by the Runtime plug-in is that it has deadlock detection and recovery involving other such locks and scheduling rules. This means that, in those cases where deadlock occurs, the system will recover. This is better than the alternative, which is to have a frozen application that must be killed and restarted. However, it is not ideal: the deadlock is handled by relinquishing the locks held by one of the offending threads until the other offender has completed, at which time the former thread is allowed to continue. In many cases this will not cause a problem, but there is a possibility of data structures being corrupted. Observing the following two rules will help reduce the risk of deadlock.

Always obtain locks in the same order. For example, obtain scheduling rules before obtaining internal locks.
Don't call client code while you hold a lock, whether it be a scheduling rule, an ordered lock, a synchronized block or any other locking mechanism. The only exception to this rule is when the client is made aware of the locks that are held. For instance, during a post-change delta, the workspace lock is held. It is up to the client who implements a delta handler to ensure that they do not obtain any locks out of order.

Summary

In this article, we have introduced the Eclipse 3.0 Jobs API. Jobs provide an infrastructure for doing work in the background. Jobs can be scheduled to run immediately or after an elapsed time, and they give notifications of changes in state (i.e., from scheduled to running to done, etc.). They can also be grouped into families in order to perform operations on a group of jobs (e.g., cancel). Included in the Job infrastructure is the concept of scheduling rules, which are used to handle multiple jobs that contend for the same resources: multiple jobs that have conflicting scheduling rules cannot be run concurrently. Scheduling rules can be combined using a MultiRule. Eclipse workspace resources are themselves scheduling rules, which allows them to be used with the Jobs API to schedule jobs that modify resources; rules are only required when modifying resources. Another change to Eclipse resource management is in how resource deltas are handled: deltas are now fired periodically in order to allow for a responsive user interface. Along those same lines, jobs have been given several properties that are used to configure how feedback of job progress and completion is shown to the user. You should check out the org.eclipse.ui.examples.jobs plug-in from the /cvsroot/eclipse repository on dev. (:pserver:anonymous@dev.:/cvsroot/eclipse). It has examples of all the things included in this article.
Once you have the examples loaded, you can experiment with various settings using the Job Factory view (which can be opened using Show View > Other from the main menu and selecting Progress Example > Job Factory).