Occam-pi is a programming language whose development began, as Occam, in the early 1980s. It is a concurrent language: it works very well for real-time programs and can make better use of multiple processors or cores than imperative languages typically can.
In a language such as Java you may have come across threads, which allow different tasks to run separately from one another. This is an example of concurrency in the imperative paradigm, but unfortunately in Java you must manage the threads yourself, which is very hard to do correctly and efficiently.
As an example of why concurrency is useful, look outside. In a flock of birds (or a group of humans), each bird reacts to the other birds and to the environment around it. Each bird is a single, independent life. There is no overarching entity that controls how each bird moves or reacts. So why should there be one in programming?
Another benefit is scalability. If you write a program in Java or C, it might run in one or a few threads, depending on how you code it. If you then put that program onto a supercomputer, it would not use anywhere near the available processing power. In a concurrent language, the program can scale across the processors because it is built from many independent, modular processes.
In occam-pi you can think of each process as a component, one that can be removed and plugged in elsewhere at any time rather than being coupled to the rest of the program. This is similar to good design practice in object-oriented languages, but taken much further.
Every process is a separate thread.
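To make this concrete, here is a minimal sketch of what such components look like in occam-pi. The process and channel names (`producer`, `consumer`, `c`) are illustrative, not from any particular library: two independent processes are composed with `PAR` and communicate only over a channel, so either one could be swapped out without touching the other.

```occam
-- A producer process: sends five integers down its output channel.
PROC producer (CHAN INT out!)
  SEQ i = 0 FOR 5
    out ! i
:

-- A consumer process: reads five integers from its input channel.
PROC consumer (CHAN INT in?)
  INT x:
  SEQ i = 0 FOR 5
    in ? x
:

-- Compose the two components in parallel, wired together by channel c.
PROC main ()
  CHAN INT c:
  PAR
    producer (c!)
    consumer (c?)
:
```

Each process in the `PAR` runs as its own (very lightweight) thread, and the only coupling between them is the channel, which is what makes the components freely rearrangeable.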
This is the first post in a series about concurrency. It is based on lectures by Peter Welch and Fred Barnes, posted with permission from both of them.
Update: I'm afraid this series never went ahead.