THE POISSON PROCESS
G A Vignaux
Abstract
These notes examine the Poisson Process, which models ``random'' arrivals in
Markovian queue models. The properties of the process, such as the
distribution of the time between events, are studied. The Erlang
distribution, which appears as the distribution of the time between one event
and the k-th next event, is described.
This is Revision 2.1.
Contents
1 Introduction
2 The Poisson Process
2.1 Merging (or adding) two Poisson streams
2.2 Partitioning a stream
2.3 The probability that NO events occur in an interval T
2.4 The probability of n events in an interval T.
2.5 How long to go before the next event?
2.6 How long between events?
2.7 The distribution of time to the k-th next event
3 The Erlang Distribution, Ek
3.1 The Erlang distribution as sum of exponentials
References
Hillier, F. S. and Lieberman, G. J. (1990).
Introduction to Operations Research.
McGraw-Hill, 5th edition.

Taha, H. A. (1992).
Operations Research: An Introduction.
Macmillan, 5th edition.

Winston, W. L. (1994).
Operations Research, Applications and Algorithms.
Duxbury Press, 3rd edition.
1 Introduction
The Poisson Process is the model we use in Queuing theory for
``Random arrivals''. It is in many ways the simplest model we could
use for arrivals into a queue system because it assumes that the
probability of an arrival in a small interval of time depends only on
the size of the interval, not on any history of the process to that
time.
The main reference for these notes is Hillier and Lieberman (1990). Other OR
books, such as Taha (1992) and Winston (1994), also explain the model. Many
other books on statistics and queue models deal with this important
stochastic process.
2 The Poisson Process

Consider a small time interval of length h, and suppose Prob{event in h} = λh.
- The probability of an event in h depends only on the length of h and not
on its position, t.
- Hence the system has no memory: it does not matter how long it is since an
event last occurred; there is a constant probability of an arrival. This is
the Markovian property.
- We choose h small enough that at most one event will occur in h.
- λ is called the event rate or arrival rate.
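The small-interval definition above can be checked directly by simulation. The sketch below (not from the notes; the values of lam, h and T are illustrative) treats each interval of length h as a Bernoulli trial with success probability λh and verifies that the observed event rate is close to λ:

```python
import random

# Sketch: approximate a Poisson process by Bernoulli trials on small
# intervals of length h, with P(event in h) = lam * h.  The parameter
# values below are illustrative choices, not from the notes.
random.seed(1)
lam = 2.0        # event rate (events per unit time)
h = 0.001        # small interval; lam * h must be << 1
T = 1000.0       # total observation time
n_intervals = int(T / h)

events = sum(1 for _ in range(n_intervals) if random.random() < lam * h)
print(events / T)   # observed rate; should be close to lam = 2.0
```

As h shrinks (with λh ≪ 1) this Bernoulli scheme converges to the Poisson process described above.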

2.1 Merging (or adding) two Poisson streams

Prob{event in stream 1 occurs in h} = λ1 h
Prob{event in stream 2 occurs in h} = λ2 h

Prob{some event in h} = λ1 h + λ2 h − (λ1 h)(λ2 h) ≈ (λ1 + λ2) h

The merged stream acts like a Poisson process with rate λ = λ1 + λ2.
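The merging result can be checked by simulation: generate two independent Poisson streams via exponential gaps (a property derived later in these notes), merge their arrival times, and measure the combined rate. A minimal sketch, with illustrative rates lam1 and lam2:

```python
import random

# Sketch: merge two independent Poisson streams (illustrative rates
# lam1, lam2) and check that the merged rate is lam1 + lam2.
random.seed(2)
lam1, lam2, T = 1.5, 2.5, 10_000.0

def arrival_times(lam, T):
    """Generate Poisson arrival times on [0, T] via exponential gaps."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(lam)
        if t > T:
            return times
        times.append(t)

merged = sorted(arrival_times(lam1, T) + arrival_times(lam2, T))
print(len(merged) / T)   # close to lam1 + lam2 = 4.0
```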
2.2 Partitioning a stream

Whenever an event (arrival) occurs we decide to divert it into the A stream
or the B stream according to fixed probabilities, pA and pB = 1 − pA.

Prob{A event in h} = Prob{event in h} × Prob{choosing A stream} = (λh) pA

The A stream acts like a Poisson stream with a rate of events = pA λ.
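This partitioning (often called "thinning") is also easy to verify by simulation. A minimal sketch, with illustrative values for lam and pA, generates one Poisson stream and flips a biased coin for each event:

```python
import random

# Sketch: partition (thin) a rate-lam Poisson stream into A and B
# streams with probabilities pA and pB = 1 - pA; the A stream should
# then have rate pA * lam.  Parameter values are illustrative.
random.seed(3)
lam, pA, T = 3.0, 0.4, 10_000.0

t, a_count = 0.0, 0
while True:
    t += random.expovariate(lam)      # exponential gap to the next event
    if t > T:
        break
    if random.random() < pA:          # divert this event to stream A
        a_count += 1
print(a_count / T)   # close to pA * lam = 1.2
```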
2.3 The probability that NO events occur in an interval T

P{0 events in interval [t, t+h]} = 1 − λh, where λh ≪ 1.

Put p0(t) = Prob{0 events in [0, t]}. Since the probability of an event in h
is independent of any others,

p0(t + h) = p0(t)(1 − λh),

and as h → 0 this gives dp0(t)/dt = −λ p0(t). Integrating, with p0(0) = 1,

p0(t) = e^{−λt}.

But note that if t → 0, p0(t) → 1 (certain to have 0 events!).
2.4 The probability of n events in an interval T.

Prob{0 events in [0, T]} = e^{−λT}.

For 1 event in [0, T]: the event occurs at some time t, with no events before
it and no events after it. Integrating over all t from 0 to T as h → 0,

Prob{1 event in [0, T]} = ∫_0^T e^{−λt} (λ dt) e^{−λ(T − t)} = λT e^{−λT}.

The probability of 2 events in [0, T] is, by the same argument,

Prob{2 events in [0, T]}
= ∫_0^T Prob{1 event in [0, t]} × (λ dt) × Prob{0 events in [t, T]}
= ∫_0^T (λt e^{−λt}) (λ dt) e^{−λ(T − t)} = ((λT)^2 / 2) e^{−λT}.

Continuing in this way leads us to the Poisson distribution:

Prob{n events in [0, T]} = ((λT)^n / n!) e^{−λT}.
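The Poisson probabilities can be checked empirically by counting events in many independent intervals. A minimal sketch, with illustrative values of lam, T and the number of trials, compares the observed frequencies with (λT)^n e^{−λT}/n!:

```python
import math
import random

# Sketch: count events in [0, T] over many trials and compare the
# empirical frequency of n events with the Poisson formula
# (lam*T)^n * e^{-lam*T} / n!.  Parameter values are illustrative.
random.seed(4)
lam, T, trials = 2.0, 1.0, 100_000

def count_events(lam, T):
    """Count arrivals in [0, T] using exponential inter-arrival gaps."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > T:
            return n
        n += 1

counts = [count_events(lam, T) for _ in range(trials)]
for n in range(5):
    empirical = counts.count(n) / trials
    poisson = (lam * T) ** n * math.exp(-lam * T) / math.factorial(n)
    print(n, round(empirical, 4), round(poisson, 4))
```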

2.5 How long to go before the next event?

Prob{next event after 0 occurs in [t, t+h]}
= Prob{0 events in [0, t]} × Prob{1 event in [t, t+h]}
= e^{−λt} (λh),

thus the p.d.f. is

f(t) = λ e^{−λt}, t ≥ 0.

NOTE
- this is the exponential distribution
- E(t) = 1/λ
- std. deviation = 1/λ
- variance = 1/λ^2
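The equal mean and standard deviation of the exponential distribution are easy to confirm by sampling. A minimal sketch, with an illustrative rate lam:

```python
import random
import statistics

# Sketch: the time to the next event is exponential, so its mean and
# standard deviation should both equal 1/lam (lam is illustrative).
random.seed(5)
lam = 4.0
gaps = [random.expovariate(lam) for _ in range(200_000)]

print(statistics.mean(gaps))    # close to 1/lam = 0.25
print(statistics.stdev(gaps))   # also close to 1/lam = 0.25
```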
2.6 How long between events?

Since the process has no memory, the zero point can itself be an event.
Thus the time between events is also exponential with rate λ, and
the average time between events is 1/λ.
2.7 The distribution of time to the k-th next event

Prob{k-th next event is in [t, t+h]}
= Prob{k−1 events in [0, t]} × (λh)
= (((λt)^{k−1} / (k−1)!) e^{−λt}) (λh).

Thus the p.d.f. is

fk(t) = (λ(λt)^{k−1} / (k−1)!) e^{−λt}.

This is an Erlang distribution with mean = k/λ
and variance = k/λ^2. It has a shape parameter of k.
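Because the time to the k-th next event is the sum of k exponential gaps, its mean and variance can be checked by simulation. A minimal sketch, with illustrative values for lam and k:

```python
import random
import statistics

# Sketch: the time to the k-th event is the sum of k exponential gaps,
# so it should have mean k/lam and variance k/lam^2 (values illustrative).
random.seed(6)
lam, k, trials = 2.0, 3, 100_000

waits = [sum(random.expovariate(lam) for _ in range(k))
         for _ in range(trials)]
print(statistics.mean(waits))      # close to k/lam = 1.5
print(statistics.variance(waits))  # close to k/lam**2 = 0.75
```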
3 The Erlang Distribution, Ek
The Erlang Distribution is important in queueing theory and simulation.
It is named after A. K. Erlang, a Danish telephone engineer considered to be
the founder of queueing theory; he wrote papers on it in 1917-1918.

The Ek is a continuous rv with density function

f(t) = ((μk)^k / (k−1)!) t^{k−1} e^{−kμt}.

It is a special type of Gamma distribution with an integer shape
parameter. Two parameters control the distribution: the scale parameter μ
and the shape parameter k. Its mean is 1/μ and its variance 1/(kμ^2).


- A form of Gamma distribution with an integer shape parameter.
- When k = 1 we have the exponential:

f(t) = ((μ·1)^1 / 0!) t^0 e^{−μt} = μ e^{−μt}

- mean 1/μ
- variance 1/(kμ^2)
- std. deviation = 1/(√k μ)
The Erlang distribution gets narrower as k increases. As k
tends to infinity the variance tends to zero. Thus the Erlang can also
represent the ``distribution'' of a constant - a deterministic value.
An intermediate value of k gives a shape between exponential and
constant.
3.1 The Erlang distribution as sum of exponentials
The Erlang can be thought of as the convolution (that is, the sum) of k
independent exponential random variables, each with the same mean.
To get a mean of 1/μ, these k exponentials must each have mean
1/(kμ).

The variance of each exponential is 1/(k^2 μ^2), so the variance of
their sum is k/(k^2 μ^2) = 1/(kμ^2).
It is an important distribution in queueing theory because
- we can simulate an Erlang rv, Ek, by summing k exponential
rvs, and
- we can consider it as a sequence of phases, each of
exponential type. Thus we can use the Markovian property and
get some theoretical results with the Erlang. This is what Erlang
did.
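The first point above is directly usable in simulation. A minimal sketch, with illustrative values for mu and k, builds Erlang samples by summing k exponentials of mean 1/(kμ) and checks the resulting mean and variance:

```python
import random
import statistics

# Sketch: build an Erlang Ek with mean 1/mu by summing k independent
# exponentials, each with mean 1/(k*mu); the variance of the sum should
# be 1/(k*mu^2).  Parameter values are illustrative.
random.seed(7)
mu, k, trials = 2.0, 4, 200_000

samples = [sum(random.expovariate(k * mu) for _ in range(k))
           for _ in range(trials)]
print(statistics.mean(samples))      # close to 1/mu = 0.5
print(statistics.variance(samples))  # close to 1/(k*mu**2) = 0.0625
```

Note that as k grows the variance 1/(kμ^2) shrinks toward zero, which matches the earlier remark that the Erlang narrows toward a deterministic value.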