Rapid Product Development

Rapid Product Development is what I like to call the process developed by Don Reinertsen.

In his first books, ‘Developing Products in Half the Time’ and ‘Managing the Design Factory’, Don Reinertsen introduced us to the importance of Rapid Product Development, and in particular the concept of ‘Cost of Delay’: a method to put a price on programme delays by factoring in the lost opportunity cost.
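
As a simple illustration (the figures and the straight-line model below are hypothetical, not taken from Reinertsen’s books), cost of delay can be estimated by spreading the expected lifecycle profit over the sales window and pricing each week of slip accordingly:

```python
# Illustrative numbers only: a simple linear model in which every week of
# delay forfeits one average week of lifecycle profit. Real cost-of-delay
# models may also include lost market share, price erosion and so on.
expected_lifecycle_profit = 12_000_000   # hypothetical total profit (£)
sales_window_weeks = 150                 # hypothetical length of the sales window

weekly_cost_of_delay = expected_lifecycle_profit / sales_window_weeks
delay_weeks = 8

print(f"Cost of delay: £{weekly_cost_of_delay:,.0f} per week")
print(f"An {delay_weeks}-week slip costs roughly £{delay_weeks * weekly_cost_of_delay:,.0f}")
```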

In Lean Manufacturing (and also in the Theory of Constraints) the concept of Flow is an important one. Don Reinertsen has incorporated this idea into his theory of Lean Product Development, utilising Queueing Theory and batch sizes, among other things, to show us how to obtain maximum flow through the Product Development process, and hence increase efficiency and minimise the cost of delay.

Key aspects of Rapid Product Development are:

  • Economic Decision Making
    • Cost of Delay
  • Queues
    • Queueing Theory
    • Focus on queue size; know it & reduce it
  • Variability
    • Cope with variability
  • Batch Size
    • Reducing batch sizes reduces queues (see the sketch after this list)
  • WIP Constraints
    • Reducing WIP reduces queues
  • Cadence, Synchronisation & Flow
    • Maintaining flow
  • Fast Feedback
    • Learn quickly & adjust
  • Decentralised Control
    • Empower teams to make decisions
    • Know the commander’s intent
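
To make the batch-size point concrete, here is a minimal illustrative sketch (a simple two-stage example, not a model taken from Reinertsen’s books) showing that handing work between stages in smaller batches finishes every item sooner, even though the total amount of work is unchanged:

```python
def completion_times(n_items, batch_size, unit_time=1.0):
    """Two sequential stages; each item needs unit_time at each stage.
    Work moves from stage 1 to stage 2 in batches of batch_size, and a
    batch only moves on once every item in it has been processed."""
    finished = []
    stage2_free = 0.0
    for start in range(0, n_items, batch_size):
        batch = min(batch_size, n_items - start)
        stage1_done = (start + batch) * unit_time      # stage 1 works continuously
        stage2_start = max(stage1_done, stage2_free)   # wait for the batch and for the server
        stage2_free = stage2_start + batch * unit_time
        finished.extend([stage2_free] * batch)         # the whole batch leaves together
    return finished

for b in (20, 5, 1):
    times = completion_times(20, b)
    print(f"batch size {b:2d}: first item done at {times[0]:4.0f}, "
          f"average completion {sum(times) / len(times):5.1f}")
```

Smaller transfer batches let downstream work start sooner, so items spend far less time queueing between stages.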

Queueing Theory

Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory in 1909. In Queueing Theory, different types of queues are described by a notation such as M/M/k, where:

  • M stands for Markov or memoryless: the first M means arrivals occur according to a Poisson process, and the second M describes the service time distribution and means the service requirements are exponentially distributed.
  • D, if used, stands for deterministic and means jobs arriving at the queue require a fixed amount of service.
  • k describes the number of servers at the queueing node (k = 1, 2,…). If there are more jobs at the node than there are servers then jobs will queue and wait for service.

Little’s Law for a stable, i.e. steady-state, system says:

Wait Time = Length of Queue/Processing Rate
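
As a quick worked example (the numbers are made up): with 30 items waiting and a team that clears 10 items a week, the expected wait is 3 weeks.

```python
queue_length = 30       # items currently waiting (hypothetical)
processing_rate = 10    # items completed per week (hypothetical)

wait_time = queue_length / processing_rate
print(f"Expected wait: {wait_time:.1f} weeks")   # prints 3.0 weeks
```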

For an M/M/1 queue (i.e. one server), the waiting time rises as the utilisation rises (see the chart below). A similarly shaped result (though with different absolute values) is obtained for other numbers of servers.

[Figure: Queueing Theory chart, waiting time vs. utilisation for an M/M/1 queue]

What this means is that if you aim for high utilisation of your resources, you are bound to suffer longer waiting times.

The queue size doubles:
  • going from 60% to 80% utilisation
  • again from 80% to 90%
  • and again from 90% to 95%
95% Capacity Utilisation = 95% Queue Time!
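
The textbook M/M/1 formulas let you check this pattern yourself. Here is a minimal sketch (assuming a single server with Poisson arrivals and exponential service, and expressing waits in multiples of the average service time):

```python
def mm1_queue_wait(rho):
    """Textbook M/M/1 result, with the mean service time taken as 1 unit:
    expected time spent waiting in the queue is rho / (1 - rho)."""
    return rho / (1 - rho)

previous = None
for rho in (0.60, 0.80, 0.90, 0.95):
    wq = mm1_queue_wait(rho)
    growth = f", {wq / previous:.1f}x the previous level" if previous else ""
    # For M/M/1 the fraction of total time spent queueing equals the utilisation rho
    print(f"utilisation {rho:.0%}: queue wait {wq:5.1f} service times, "
          f"{rho:.0%} of total time spent queueing{growth}")
    previous = wq
```

The wait roughly doubles at each step above 80% utilisation, and since the queued fraction of total time equals the utilisation for an M/M/1 queue, 95% utilisation does indeed mean about 95% of an item’s time is spent waiting rather than being worked on.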

Project delays are often caused by waiting time at overloaded resources in one or more areas, so increase those resources or reduce the number of projects using them, and, because projects do not usually provide a steady flow of work, keep some slack so you can cope with the peaks.

Managers don’t like to have their people and equipment working at less than 100% utilisation, which they equate to 100% efficiency and maximum profitability.

However, if you can’t get work through the system at maximum speed because of bottlenecks and unsteady demand, then you are not actually operating at maximum profitability.

If you would like to discuss this further, please contact us.