JOB-SHOP SCHEDULING OPTIMIZATION WITH STOCHASTIC PROCESSING TIMES

International Journal of Engineering Technologies and Management Research (http://www.ijetmr.com)

Jaber S. Alzahrani
Department of Industrial Engineering, Engineering College at Alqunfudah, Umm Al-Qura University, Saudi Arabia

Abstract: In this study, a job shop scheduling optimization model under risk has been developed to minimize the makespan. The model has been built using Microsoft Excel spreadsheets and solved using the @Risk solver. A set of experiments has also been conducted to examine the accuracy of the model, and its effectiveness has been proven.


Introduction
Job Shop Scheduling (JSS) problems with random processing times and various cost structures have been considered by researchers because of the costs incurred when a job is not completed by its due time, with the aim of minimizing the expected total cost. The general procedure of heuristic decision-making rules is used in situations where several jobs are ready to be served on one machine [4][5][6]. Flow Shop Scheduling (FSS) with random processing times has been studied by Singer [3] to minimize the expected total weighted tardiness. Moreover, a simulation-based genetic algorithm for solving a job shop with random processing times has been proposed by Yoshitomi [7] to minimize the expected makespan. Yoshitomi and Yamaguchi [8] have enhanced this genetic algorithm by applying a new crossover operator; in these two algorithms, solutions occurring with very high frequency across all generations are selected as good solutions. Tavakkoli-Moghaddam et al. [9] have also considered a job shop with random operations, where the time difference between the delivery and completion of jobs, as well as the related operational or idle cost of machines, must be minimized. They have presented a hybrid method consisting of a neural network to generate an initial solution and a simulated annealing algorithm to improve its quality. Some authors have developed exact and heuristic algorithms for job shop scheduling problems with the makespan or mean flow time criterion subject to random processing times, which can take any real value between known lower and upper bounds [10][11][12]. Luh et al. [13] have presented a job shop approach based on combined Lagrangian relaxation and stochastic dynamic programming, taking into consideration uncertain arrival times, processing times, due dates, and part priorities, in order to minimize the expected job tardiness and earliness cost.
Neumann and Schneider [14] have proposed heuristics based on aggregate schedules for a job shop scheduling problem with stochastic precedence constraints, where the expected makespan is to be minimized.
Finally, Alcaide et al. [15] have studied open shop scheduling problems of random processing times, breakdowns and repair times, where the expected makespan must be minimized. They developed a general dynamic procedure that converts any problem into a set of scheduling problems without breakdowns.
In the recent literature, two new criteria have been brought to the attention of researchers: robustness and stability [1]. Robustness means that schedule performance is still acceptable when something unforeseen happens, whereas stability means that the schedule does not deviate significantly due to disruption and revision. In general, there are two methodologies for dealing with uncertainty in a scheduling environment: proactive and reactive scheduling. Incorporating knowledge of the uncertainty at the decision stage, proactive approaches focus on generating more robust predictive schedules to minimize the effects of disruptions. On the other hand, reactive scheduling algorithms are implemented at execution time to adjust the schedule according to the real-time situation when the uncertainty is realized or disruptions occur.
For example, Lu et al. [2] addressed the problem of finding a robust and stable schedule for a single machine with availability constraints: the machine suffers unexpected breakdowns following a Weibull failure function, and a proactive approach generates a long-term initial schedule under uncertainty, jointly determining production planning and preventive maintenance to simultaneously optimize the bi-objective of robustness and stability.
The mission of the scheduling process is to optimally allocate the suitable equipment to perform the required jobs over a period of time so as to achieve the business goals. Therefore, many efforts have been devoted to solving Job Shop Scheduling Problems (JSSP) optimally, with most research aiming at minimizing the maximum completion time. JSSP is an NP-hard problem, making it difficult to find an exact solution in a reasonable computation time [16]. A number of optimization methods have been developed to solve JSSP: Tabu Search [17,18], Simulated Annealing [19,20], Genetic Algorithms [21,22], Particle Swarm Optimization [23,24], Ant Colony Optimization [25,26], Differential Evolution [27], Memetic Algorithms [28], Mathematical Programming [29,30], and Goal Programming [31].
Genetic Algorithms are well suited to solving scheduling problems: they can reach optimal or near-optimal solutions, and they have the advantage of searching a large population toward a globally good solution.
Risk analysis in @RISK is a quantitative method that seeks to determine the probability distributions of outcomes resulting from decisions. In general, a @RISK risk analysis encompasses four steps: 1) Developing a model: define the problem or situation in an Excel model; 2) Identifying uncertainty: specify the uncertain model inputs as probability distributions; 3) Analyzing the model with simulation: run a Monte Carlo simulation to obtain the distributions of the outputs; 4) Making a decision: use the results generated by @RISK to choose a course of action. The first three steps are supported by a powerful and flexible set of tools in Excel that facilitate model building and risk analysis.
The aim of this paper is to present a job shop scheduling (JSS) optimization model under risk, developed to minimize the makespan. The model has been built using Microsoft Excel spreadsheets and solved using @Risk.

Problem Description and Assumptions
The goal of this model is to solve a JSS problem in which the spreadsheet-based commercial genetic algorithm solver @Risk [32] is used to optimize the makespan function given in Equation 4 below.
The classical Job Shop Scheduling Problem (JSSP) considers the allocation of n jobs to m different machines or pieces of equipment. Each job has to undergo multiple operations on various equipment, with its own set of processing times and routing characteristics. The processing time Phj of each job on each equipment is known, as is the due date Dj of each job.
The model assumes the following:
• The processing times are stochastic;
• Each job has its own due date;
• No job visits the same equipment twice;
• All jobs are ready for processing at time zero;
• All equipment is available at time zero;
• Each equipment can process only one job at a time, and each job can be processed by only one equipment at any instant in time;
• Set-up time for any operation is included in the processing time;
• The transportation time required to move jobs between equipment is assumed to be negligible;
• Operations cannot be interrupted (no preemption);
• There are no precedence constraints among operations of different jobs.
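Under these assumptions, the core spreadsheet logic can be sketched in Python. The data below are illustrative (a hypothetical 3-job, 3-machine instance, not the paper's 5J×4M values from Table 1): each job is an ordered list of (machine, processing time) pairs mirroring the routing and Phj values, and a decoder turns an operation sequence into a semi-active schedule whose makespan is returned.

```python
# Hypothetical 3-job x 3-machine instance (illustrative values only):
# each job is an ordered list of (machine, processing_time) pairs.
jobs = {
    "J1": [(0, 3), (1, 2), (2, 2)],
    "J2": [(0, 2), (2, 1), (1, 4)],
    "J3": [(1, 4), (2, 3), (0, 1)],
}
due_dates = {"J1": 9, "J2": 8, "J3": 10}  # Dj, unused by the makespan objective

def makespan(jobs, op_sequence):
    """Decode an operation sequence (job ids, each repeated once per
    operation) into a semi-active schedule and return its makespan."""
    next_op = {j: 0 for j in jobs}       # index of next unscheduled op per job
    job_ready = {j: 0.0 for j in jobs}   # completion time of the job's last op
    mach_ready = {}                      # completion time of each machine
    for j in op_sequence:
        machine, p = jobs[j][next_op[j]]
        # an operation starts when both its job and its machine are free
        start = max(job_ready[j], mach_ready.get(machine, 0.0))
        finish = start + p
        job_ready[j] = finish
        mach_ready[machine] = finish
        next_op[j] += 1
    return max(job_ready.values())

# One feasible interleaved sequence of the nine operations:
seq = ["J1", "J2", "J3", "J2", "J1", "J3", "J2", "J1", "J3"]
print(makespan(jobs, seq))
```

In a GA such as the one embedded in @Risk, the adjustable cells encode such a sequence and the decoder above plays the role of the spreadsheet formulas that compute the target cell.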

Objective Functions
The objective of this model is to minimize the makespan, which is defined in Equation 1.
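In standard notation, with Cj denoting the completion time of the last operation of job j, the makespan objective can be written as:

```latex
\min \; C_{\max}, \qquad C_{\max} = \max_{1 \le j \le n} C_{j},
```

where the minimization is over all feasible schedules satisfying the routing and machine-capacity constraints stated above.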

Computational Results and Analysis
In this section, the results of applying the proposed model are presented. The model has been solved using the @Risk optimizer on an Intel® Core™ i3-2310M CPU @ 2.10 GHz with 3 GB of RAM. The GA parameters are: population size N = 50, number of generations G = 40,000, probability of crossover Pc = 0.5, and probability of mutation Pm = 0.1.
To avoid confusion, it is useful to distinguish between the terms "trials" and "iterations" in the RISKOptimizer process. The overall process is a sequence of trials, where a given set of adjustable cell values is tested on each trial. To evaluate the target cell (and the constraint cells) for a trial, a standard @RISK simulation is run for a specified number of iterations. This distinction helps explain why RISKOptimizer can take some time: it might require 1,000 trials to converge to optimality, and each trial might require a 500-iteration simulation, which amounts to substantial computation, especially for complex models.
As in traditional optimizers, constraints can be entered in RISKOptimizer. When the value of the constrained cell does not change during a simulation, the constraint is called non-probabilistic; otherwise, it is called probabilistic. A probabilistic constraint is based on a statistic of the distribution of the cell's values, such as "the mean of A1 <= 100." RISKOptimizer calculates the statistic at the end of each simulation to decide whether the constraint is satisfied.
The model accuracy and capability are verified by solving and analyzing a 5-job, 4-machine (5J×4M) problem. The model inputs, processing sequences, and durations are given in Table 1. Initially, the problem has been solved as a deterministic case to set a baseline for comparison and discussion of the stochastic solutions. The Gantt chart of the deterministic case is shown in Figure 1. It may be noticed in the Gantt chart that there is a slack time of 18 units of time after finishing the second operation of the second job on the second machine. Therefore, the effect of making the processing time of this operation stochastic on the optimal makespan is studied.
In the first stochastic experiment, the duration P22 is assumed to follow a uniform distribution between a minimum value of 27 and a maximum value of 33, as shown in Figure 2.
It may also be noticed in the Gantt chart in Figure 1 that the second operation of job 5, on machine 3, is one of the critical operations. Therefore, the next experiment studies the effect of making the processing time of this operation stochastic on the optimal makespan. The duration P53 of this critical operation is assumed to follow a uniform distribution between a minimum value of 14 and a maximum value of 18, as shown in Figure 5. Figure 6 shows the operation finishing time, which has a uniform distribution shape. As this operation is one of the critical operations, the optimal makespan will not take a single value as in the previous case, but will instead follow a probability distribution of uniform shape, as shown in Figure 7.
In the last experiment, the processing times of all jobs are assumed to be stochastic and to follow a normal distribution. Since the normal distribution has two parameters, the standard deviation is given as a ratio of the mean; this ratio is known as the variability and is assumed to be 10%, i.e., N(µij, σij) with σij equal to 10% of µij. Figure 8 shows the distribution of the resulting optimal makespan at 10% variability. In addition, Figure 9 depicts the correlation between the resulting makespan and the uncertain processing times, from which it is clear that the operations most correlated with the makespan are O53 and O42, which represent the first and last operations of the critical path.
In Figure 10, the change in the makespan mean across a range of uncertain processing times is studied and presented. It is noticed that the duration P53 has the greatest effect on the makespan, which confirms the conclusion drawn from Figure 9.
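The 10% variability assumption can be sketched as follows. The mean value used here is hypothetical (16, the midpoint of the uniform range assumed for P53 above); the sketch only shows how σij = 0.10 µij shapes the sampled durations.

```python
import random
import statistics

def sample_processing_time(rng, mean, variability=0.10):
    """Draw one stochastic duration from N(mu, sigma) with sigma equal to
    10% of mu, as assumed in the experiment; negative draws, which are
    extremely rare at this variability, are truncated at zero."""
    return max(0.0, rng.gauss(mean, variability * mean))

rng = random.Random(42)
mean_p53 = 16.0  # hypothetical mean duration for the critical operation O53
draws = [sample_processing_time(rng, mean_p53) for _ in range(10_000)]

print(round(statistics.mean(draws), 1))   # sample mean, close to mu
print(round(statistics.stdev(draws), 1))  # sample std dev, close to 0.10 * mu
```

Feeding such draws into the schedule evaluation on every iteration is what turns the single deterministic makespan into the distribution reported in Figure 8.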

Conclusion
The JSS optimization problem has been successfully solved to optimize the makespan under uncertainty in the processing times of all jobs.
The developed JSS optimization model has been built using Microsoft Excel spreadsheets and solved using the @Risk solver. A set of four experiments has been conducted to solve the problem using deterministic processing times, an uncertain non-critical operation processing time, an uncertain critical operation processing time, and uncertainty in all operation processing times.
Finally, the accuracy of the model and its effectiveness have been demonstrated through the analysis of the obtained results.