The Java™ Tutorial

Trail: Essential Java Classes
Lesson: Threads: Doing Two or More Tasks At Once

Threads Summary

This chapter provides a great deal of information about using threads in the Java platform. This section summarizes where you can find various classes, methods, and language features that relate to threads.

Package Support for Threads

The java.lang package provides basic support for threads, most notably the Thread class, the Runnable interface, and the wait, notify, and notifyAll methods of Object. The java.util.concurrent packages define a wide range of concurrency utilities, including:
  • Task scheduling framework: The Executor framework is a collection of interfaces and classes for standardizing invocation, scheduling, execution, and control of asynchronous tasks according to a set of execution policies. Implementations are provided that allow tasks to be executed within the submitting thread, in a single background thread (as with events in Swing), in a newly created thread, or in a thread pool, and developers can create implementations of Executor supporting arbitrary execution policies. The built-in implementations offer configurable policies such as queue length limits and saturation policy which can improve the stability of applications by preventing runaway resource consumption.
  • Locks: While locking is built into the Java programming language via the synchronized keyword, built-in monitor locks have a number of inconvenient limitations. The java.util.concurrent.locks package provides a high-performance lock implementation with the same memory semantics as synchronization that also supports specifying a timeout when attempting to acquire a lock, multiple condition variables per lock, non-lexically scoped locks, and interruption of threads that are waiting to acquire a lock.
  • Synchronizers: General purpose synchronization classes, including semaphores, mutexes, barriers, latches, and exchangers, which facilitate coordination between threads.
  • Concurrent collections: The Collections Framework (discussed in the Collections lesson of the Essential Java Classes trail) contains several concurrent collections, including the Queue and BlockingQueue interfaces and high-performance, concurrent implementations of Map, List, and Queue.
  • Atomic variables: Classes for atomically manipulating single variables (primitive types or references), providing high-performance atomic arithmetic and compare-and-set methods. The atomic variable implementations in java.util.concurrent.atomic offer higher performance than would be available by using synchronization (on most platforms), making them useful for implementing high-performance concurrent algorithms as well as conveniently implementing counters and sequence number generators.
  • Nanosecond-granularity timing: The System.nanoTime method provides access to a nanosecond-granularity time source for making relative time measurements, and methods that accept timeouts (such as BlockingQueue.offer, BlockingQueue.poll, Lock.tryLock, Condition.await, and Thread.sleep) can take timeout values in nanoseconds. The actual precision of System.nanoTime is platform-dependent.
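
The Executor framework described above can be sketched with a short example. This is a minimal illustration, not part of the tutorial's own code; the class name ExecutorSketch and the toy task are invented for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ExecutorSketch {
    public static void main(String[] args) throws Exception {
        // A fixed-size pool: at most two tasks run concurrently.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Submitting a Callable returns a Future for the pending result.
        Future<Integer> result = pool.submit(() -> 6 * 7);
        System.out.println(result.get());   // blocks until the task completes: 42

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that the submitting code never creates a Thread directly; the pool's execution policy decides which thread runs the task.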
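The timed lock acquisition mentioned in the Locks bullet can be sketched as follows; the class name and counter are hypothetical, but ReentrantLock.tryLock with a timeout is the real API.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        // Unlike synchronized, tryLock can give up after a timeout
        // instead of blocking indefinitely.
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                counter++;
            } finally {
                lock.unlock();   // always release in a finally block
            }
        }
        System.out.println(counter);
    }
}
```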
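As one example of the synchronizers bullet, a CountDownLatch lets one thread wait until several others have finished; the worker loop below is an invented placeholder.

```java
import java.util.concurrent.CountDownLatch;

public class LatchSketch {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                // ... do some work here ...
                done.countDown();   // signal that this worker has finished
            }).start();
        }
        done.await();   // block until all three workers have counted down
        System.out.println("all workers finished");
    }
}
```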
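The BlockingQueue interface from the concurrent-collections bullet supports the classic producer/consumer pattern; this sketch uses an ArrayBlockingQueue with an arbitrary capacity of 10.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        Thread producer = new Thread(() -> {
            try {
                queue.put("hello");   // blocks if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        // take() blocks until the producer has put an element.
        System.out.println(queue.take());
    }
}
```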
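The counter use case mentioned in the atomic-variables bullet can be sketched with AtomicLong: two threads increment the same counter without any lock, yet no updates are lost.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterSketch {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong counter = new AtomicLong();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementAndGet();   // atomic read-modify-write, no lock needed
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get());   // always 20000
    }
}
```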
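A relative measurement with System.nanoTime, as described in the timing bullet, looks like this; the 50 ms sleep is an arbitrary stand-in for real work.

```java
public class TimingSketch {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(50);   // stand-in for work taking roughly 50 ms
        long elapsedNanos = System.nanoTime() - start;
        // nanoTime is meaningful only for relative measurements; the
        // elapsed time is at least the requested sleep duration.
        System.out.println(elapsedNanos >= 50_000_000L);
    }
}
```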
Language Support for Threads

The Java programming language has two keywords related to the synchronization of threads: volatile and synchronized. Both of these language features help ensure the integrity of data that is shared between concurrently running threads. The section Synchronizing Threads (in the Essential Java Classes trail) discusses thread synchronization issues.
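
The synchronized keyword can be illustrated with a small shared counter; the class name SyncSketch and the thread bodies are invented for this sketch.

```java
public class SyncSketch {
    private int count = 0;

    // synchronized ensures only one thread at a time executes these methods
    // on a given instance, and makes updates visible to other threads.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncSketch c = new SyncSketch();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());   // always 2000; without synchronized, updates could be lost
    }
}
```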

Runtime Support for Threads

The Java runtime environment contains the scheduler, which is responsible for running all the existing threads. The scheduler uses a fixed-priority scheduling algorithm, which usually means that at any given time, the highest-priority thread is running. However, this is not guaranteed. The thread scheduler may choose to run a lower-priority thread to avoid starvation. For this reason, use priority only to affect scheduling policy for efficiency purposes. Do not rely on thread priority for algorithm correctness.
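
Setting a priority is a one-line hint to the scheduler; this sketch (class name and task invented) shows the API, and the comment restates the caveat above.

```java
public class PrioritySketch {
    public static void main(String[] args) throws InterruptedException {
        Thread background = new Thread(() -> System.out.println("background task ran"));
        // A hint to the scheduler, not a guarantee: lower-priority threads
        // may still run, so never depend on priority for correctness.
        background.setPriority(Thread.MIN_PRIORITY);
        background.start();
        background.join();
    }
}
```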

Other Thread Information

This chapter has only scratched the surface of the topics of threads and concurrency control. For further information, see the API documentation for the java.lang.Thread class and the java.util.concurrent packages.


    Copyright 1995-2005 Sun Microsystems, Inc. All rights reserved.