[00:00:00] Gary Fischer, PE: And we’re ready to move on to our next presentation here, Zahra. Go ahead and turn on your video there. The next research project report-out is really interesting and paving the way for the future. In the not too distant future, I don’t think quantity surveying is gonna be a thing anymore, and even manual physical progress assessment.
[00:00:25] Gary Fischer, PE: It’s all gonna be automated. And so I’d like to introduce Zahra Mazlaghani. How’d I do? You’re on mute there.
[00:00:37] Zahra Mazlaghani: Yes, Sorry.
[00:00:38] Gary Fischer, PE: All right. She’s a PhD student in the Department of Civil and Environmental Engineering at Stanford, where she also earned her Master’s of Science degree in Civil and Environmental Engineering.
[00:00:48] Gary Fischer, PE: Her PhD research focuses on advancing the use of computer vision and machine learning techniques to detect and classify construction activities directly from video footage, and to automate the extraction of production data. Her work enables the development of real-time production system modeling as well as project production control, empowering construction teams to make data-driven decisions in real time that improve project performance on construction sites.
[00:01:18] Gary Fischer, PE: I think you’re gonna find this real interesting, so I’ll turn it over to you.
[00:01:23] Zahra Mazlaghani: Okay. And hi everyone. Can you see my screen?
[00:01:29] Gary Fischer, PE: Yep, we can see you.
[00:01:31] Zahra Mazlaghani: Okay. So first I’m gonna give you a brief summary of my presentation. In order to have effective project production control, we need reliable, continuous, granular data, including data on work process interruptions.
[00:01:48] Zahra Mazlaghani: However, capturing reliable, continuous, granular data and work process disruptions has been challenging. So in my presentation, I’m gonna show you how easily a computer vision algorithm can capture continuous, granular data to enable effective project production control. When we think about an upcoming construction project, the first question we ask is: what does the production system look like?
[00:02:22] Zahra Mazlaghani: To answer this question, we begin by defining the master schedule. Then we move into mapping and modeling the production system, which is a representation of how the work will flow. Next we simulate, analyze, and optimize that model. By running a simulation, we can see how the system behaves under different conditions.
[00:02:47] Zahra Mazlaghani: And once we identify the configuration that achieves our performance targets, we can lock it in. That configuration includes the number of equipment and labor, the location of resources, and the amount of work in process the system can carry. And once execution begins, we shift into control. Control is not just about measuring.
[00:03:13] Zahra Mazlaghani: It’s about enforcing the rules and parameters we set to make sure the system behaves the way we intended. And then, through field data capture, we can continuously improve the system. In this process, the most important part is to capture reliable, continuous, granular actual field data to enable effective production system optimization and production control.
[00:03:47] Zahra Mazlaghani: So I’m gonna go deeper into each step of this process using an example in the next slide. This is a typical floor of the 41 cast-in-place concrete project, divided into three zones: left, middle, and right. In order to build a typical floor, we need some operations or activities, such as installing wall and column formwork, slab concrete pouring, and so on.
[00:04:15] Zahra Mazlaghani: So in this dynamic cast-in-place concrete environment, unexpected or unplanned workflow interruptions might happen to the activities. So the site manager has to be able to continuously and reliably detect and capture workflow interruptions and actual granular data of the activities, so that they can continuously and reliably make efficient control of work and optimization of resource assignments for subsequent shifts.
[00:04:44] Zahra Mazlaghani: So in this project I’m gonna focus on the production system in the right zone. This is the production process in the right zone, as mapped. And you can see this production process starts with installing wall and column formwork, then installing wall and column rebar, and so on.
[00:05:21] Zahra Mazlaghani: Once a production process is mapped, the next step is to simulate, analyze, and optimize it using actual field data. On the left, you can see the production model, and on the right there are outputs from the simulation for this project. By running this simulation, the construction manager could recognize which resources were bottlenecks, and they could also test different configurations of resources or work in process to make sure they could meet the schedule milestones before moving into work execution.
[00:05:58] Zahra Mazlaghani: And here we see how production control and improvement actually work in practice. For this project, they started with production scheduling for each activity and then moved to daily production planning, based on the availability of resources for upcoming work.
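The simulate-analyze-optimize step just described, finding which resource paces the system, can be sketched in plain Python. This is only a minimal illustration, not the project's actual simulation model; the operation names, durations, and crew counts below are invented assumptions.

```python
# Minimal sketch of a production-system "simulate and find the bottleneck" step.
# All operation names, durations, and crew counts below are illustrative
# assumptions, not data from the project.

def bottleneck(operations):
    """Return the operation with the highest effective cycle time.

    operations: list of (name, duration_per_zone_hours, crews).
    With parallel crews, effective cycle time = duration / crews;
    the slowest operation paces (sets the takt of) the whole system.
    """
    def cycle_time(op):
        name, duration, crews = op
        return duration / crews
    return max(operations, key=cycle_time)

ops = [
    ("install formwork",      12.0, 2),  # 12 h per zone, 2 crews -> 6.0 h
    ("install rebar",         10.0, 1),  # 10 h per zone, 1 crew  -> 10.0 h
    ("slab concrete pouring",  6.0, 1),  #  6 h per zone, 1 crew  -> 6.0 h
]

name, duration, crews = bottleneck(ops)
print(name, duration / crews)  # install rebar 10.0
```

Testing different resource configurations, as the presenter describes, then amounts to varying the crew counts and re-running this check until the pacing operation meets the target takt.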
[00:06:26] Zahra Mazlaghani: Then the crews executed the work, and then they captured data from the work execution. That data was compared against the plan by checking the status of work. And finally it fed into analytics-based continuous improvement to reduce variability and make the production system more reliable over time.
[00:06:50] Zahra Mazlaghani: However, after implementing production control, they saw that the actual schedule exceeded the planned schedule. And this raises the question: why did this happen, and what was missing in this approach? The problem was that production control and improvement relied on foremen and workers who verbally reported the status of work or manually sent photos of completed tasks, which is error-prone.
[00:07:29] Zahra Mazlaghani: This is because some tasks might have been falsely marked as finished, and some tasks might have been missed, which results in unreliable field data or missed workflow interruptions. Production control functioned with unreliable data at a lower level of granularity, and the causes and effects of workflow interruptions or constraints remained unknown. However, we know that at the operational level, we need a higher level of data granularity to ensure effective control of work and optimization of labor and equipment assignments for subsequent shifts. Regarding the problem that I explained in that project,
[00:08:30] Zahra Mazlaghani: there are two research questions. The first research question is: how can a higher level of granular data and workflow interruptions be automatically captured and analyzed? And the second one is: how can analyzing this higher level of granular data and workflow interruptions help close the information granularity gap between the operational and management levels?
[00:08:59] Zahra Mazlaghani: In order to answer these questions, in this presentation I focus on the slab concrete pouring activity, because this activity is a critical determinant of project success in cast-in-place concrete work. Computer vision algorithms have proved to be the most promising method for activity detection, and existing computer vision algorithms can detect worker and equipment activities separately to extract automatic and reliable
[00:09:42] Zahra Mazlaghani: data. However, there is a gap in existing computer vision algorithms for activity detection: current vision-based methods cannot consider the spatial-temporal interactions between workers, equipment, and materials, which are required for reliable activity classification. For example,
[00:10:07] Zahra Mazlaghani: in this slide there are two photos. In both photos the concrete boom is fixed, but in the left photo no worker is holding the nozzle of the concrete boom. That’s why this activity is considered idle time. However, in the right photo one worker is holding the nozzle of the concrete boom.
[00:10:36] Zahra Mazlaghani: So that’s why it’s considered pouring time. You can see that in this example, the spatial relationship between the worker and the nozzle is very important to classify these two activities. For this research, I leveraged an object detection method and a graph neural network to make a new vision-based model to detect the different activities based on the different interactions between objects that I explained. YOLO version 8 (YOLOv8) is an object detection model, which is used just to detect objects: workers, equipment, and materials. And the graph neural network is used to classify activities based on the different relationships between objects. The case study is video footage of the 41 S3 project.
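The detection-to-graph step of such a pipeline can be illustrated in plain Python. This is only a sketch: the class names, bounding boxes, and the 50-pixel "near" threshold are invented assumptions, and a real pipeline would pass these node and edge features into a trained graph neural network rather than reading the relation off directly.

```python
import math

# Sketch: turn per-frame object detections (as an object detector like YOLOv8
# would emit them) into a graph whose edges encode spatial relations between
# workers, the nozzle, and the bucket. Boxes and threshold are illustrative.

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def build_graph(detections, near_px=50):
    """detections: list of (class_name, (x1, y1, x2, y2)).
    Returns (nodes, edges); an edge connects every object pair and carries
    the center-to-center distance plus a boolean 'near' relation."""
    nodes = [cls for cls, _ in detections]
    edges = []
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            d = math.dist(center(detections[i][1]), center(detections[j][1]))
            edges.append((i, j, d, d < near_px))
    return nodes, edges

# Two frames like the slide's example: same fixed boom, different worker position.
idle_frame    = [("nozzle", (100, 100, 120, 160)), ("worker", (400, 100, 430, 180))]
pouring_frame = [("nozzle", (100, 100, 120, 160)), ("worker", (110, 110, 140, 190))]

_, e_idle = build_graph(idle_frame)
_, e_pour = build_graph(pouring_frame)
print(e_idle[0][3], e_pour[0][3])  # False True: only the second frame has worker-near-nozzle
```

The point of the graph representation is exactly the idle-versus-pouring distinction from the two photos: the object classes alone are identical in both frames, and only the worker-to-nozzle relation separates them.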
[00:11:58] Zahra Mazlaghani: So first I decomposed the slab concrete pouring into six sub-activities based on the spatial-temporal relationships between objects. The first sub-activity is preparation, filling the bucket: when the nozzle of the concrete boom is on top of a concrete bucket and is fixed, regardless of whether a worker is around or not. The second sub-activity is pouring time: when exactly one worker is holding the nozzle and at least one other worker is around that worker. And the third sub-activity is concrete boom idle time: when no one is holding the nozzle and the nozzle is fixed.
[00:12:49] Zahra Mazlaghani: And the fourth sub-activity is concrete boom moving time: when the concrete boom is just moving and not pouring. I also assigned work categories to these sub-activities. I considered preparation, filling the bucket, as support work; I considered pouring and moving time as direct work; and I considered idle time as unproductive work, because later I’m gonna show you the work process interruptions of this activity.
[00:13:32] Zahra Mazlaghani: So that’s why I needed to classify them into different work categories. And the fifth sub-activity is post-pouring, closing the lid: when exactly one worker is closing the lid and the nozzle is fixed. And the last one is post-pouring, flushing out: when the nozzle of the concrete boom is on top of another concrete bucket
[00:14:03] Zahra Mazlaghani: and some workers are around it. Both of these are considered support work. And then I trained YOLO and the graph neural network based on the different spatial-temporal relationships present in these six sub-activities that I explained. And you can see that the new vision-based method was able to
[00:14:37] Zahra Mazlaghani: Detect each sub activity successfully, and this is the preparation filling bucket.
[00:14:48] Zahra Mazlaghani: And you should see the “SA” at the top; SA means the sub-activity. And this is the moving time.
[00:15:05] Zahra Mazlaghani: So YOLO just detects each object, and the graph neural network can classify the sub-activity based on the different spatial relationships that we defined.
[00:15:33] Zahra Mazlaghani: And this is pouring time. As you can see, the model is also able to count the number of workers who are actively involved in each sub-activity.
[00:15:57] Zahra Mazlaghani: And this is idle time.
[00:16:06] Zahra Mazlaghani: As soon as any worker gets very close to the bottom of the nozzle of the concrete boom, the model recognizes it as pouring time. But as soon as this worker walks away from the nozzle, the model detects it as idle time.
[00:16:36] Zahra Mazlaghani: And this is another sub-activity, which is post-pouring, flushing out.
[00:16:44] Zahra Mazlaghani: And this is another concrete bucket. And this is post-pouring, closing the lid; you should pay attention to the worker closing the lid. Later he walks away from the nozzle, and then the model recognizes it as idle time, if you look at the SA label. And now it’s moving. And then, after the model detected all six sub-activities, it automatically extracted actual granular data for each sub-activity.
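That extraction step, collapsing per-frame sub-activity labels into one row per contiguous run with start time, end time, and worker statistics, amounts to run-length grouping. A minimal sketch; the frame data below is made up for illustration.

```python
# Sketch: collapse per-frame (time, sub_activity, active_workers) records into
# a granular table with one row per contiguous run: start/end time plus the
# average and maximum worker counts. The sample frames are invented.

def extract_intervals(frames):
    """frames: list of (t_seconds, sub_activity, active_workers), time-ordered."""
    rows = []
    for t, sa, w in frames:
        if rows and rows[-1]["sub_activity"] == sa:
            r = rows[-1]            # same run continues: extend it
            r["end"] = t
            r["workers"].append(w)
        else:                       # new run: open a fresh row
            rows.append({"sub_activity": sa, "start": t, "end": t, "workers": [w]})
    for r in rows:                  # finalize worker statistics per row
        counts = r.pop("workers")
        r["avg_workers"] = sum(counts) / len(counts)
        r["max_workers"] = max(counts)
    return rows

frames = [(0, "pouring", 3), (10, "pouring", 4), (20, "pouring", 4),
          (30, "boom_idle", 0), (40, "pouring", 3)]
table = extract_intervals(frames)
print(len(table), table[0]["start"], table[0]["end"], table[0]["max_workers"])
```

A location column like the one on the slide would come from the camera's zone assignment; it is omitted here to keep the sketch self-contained.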
[00:17:37] Zahra Mazlaghani: As you can see in this table, this granular data includes the location, start time, end time, and the average and maximum number of workers who are actively involved in each sub-activity. And after extracting this granular data, I analyzed it. First I obtained the granular sub-activity time distribution in slab concrete pouring.
[00:18:11] Zahra Mazlaghani: This slide shows that pouring time takes the majority of the time compared to the other sub-activities; each of the other sub-activities has less than 8% of total time. And since pouring time is considered direct time, obviously in this plot
[00:18:41] Zahra Mazlaghani: Direct time has the majority of time whereas support support and productive work has very low share of total time. This data analysis can give us a clear view of where improvement opportunity exists in the production system and the why and the, how many time spent on, on productive work or on support work and direct work.
[00:19:18] Zahra Mazlaghani: And then the construction manager can make a decision. And this is the timeline of sub-activities. This timeline rebuilds the actual sequence of direct, support, and unproductive work. We can see a long stretch of pouring work as direct work, followed by small pockets of idle time and support tasks, including post-pouring, flushing out, and closing the lid. So understanding this sequence is essential for identifying where there are interruptions, optimizing flow, and improving production control. I also used different duration classes: here I broke down how long each sub-activity typically lasts.
[00:20:25] Zahra Mazlaghani: Some sub-activities, like pouring and preparation, are dominated by long durations, over 10 minutes, while other sub-activities, such as moving and idle time, occur mostly in short durations.
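The duration-class breakdown is a simple binning of occurrence durations. The class boundaries and sample durations below are illustrative assumptions, not the project's data; they only reproduce the pattern described, long stable pouring runs versus short fragmented idle pockets.

```python
# Sketch: bin sub-activity occurrence durations into duration classes to show
# fragmentation. Boundaries and sample durations are invented.

def duration_class(minutes):
    if minutes < 2:
        return "<2 min"
    if minutes < 10:
        return "2-10 min"
    return ">10 min"

def class_histogram(occurrences):
    """occurrences: dict sub_activity -> list of durations in minutes.
    Returns dict sub_activity -> dict duration_class -> count."""
    hist = {}
    for sa, durs in occurrences.items():
        counts = {}
        for d in durs:
            c = duration_class(d)
            counts[c] = counts.get(c, 0) + 1
        hist[sa] = counts
    return hist

occ = {"pouring": [25.0, 18.5, 12.0],        # long, stable runs
       "boom_idle": [0.5, 1.2, 0.8, 1.9]}    # short, fragmented pockets
h = class_histogram(occ)
print(h["pouring"], h["boom_idle"])  # {'>10 min': 3} {'<2 min': 4}
```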
[00:20:44] Zahra Mazlaghani: Also, direct work comes in stable, long durations because of pouring time, whereas support and unproductive work are highly fragmented. And since the model was able to count the number of workers who are actively involved in each sub-activity, I obtained the workforce intensity
[00:21:16] Zahra Mazlaghani: Per sub, activity. As you can see, pouring requires the highest workforce intensity and the support works such as post pouring clo closing lead and the flushing out needs, lower num number of workers and obviously ideal time doesn’t need any worker.
[00:21:47] Zahra Mazlaghani: So understanding these workforce patterns helps identify where labor resources are concentrated and where resource allocation should be improved. And at the end, by extracting a higher level of granular data and workflow interruptions, we can help close the gap between field execution and management decisions, because we can analyze the reasons for task non-completion, measure the reliability of work execution, and track milestone forecast trends. Together, these insights help project teams make more informed decisions and continuously improve execution performance. So thank you for listening, and I’m ready for any questions.
[00:22:56] Gary Fischer, PE: Alright. That’s really good stuff. We’ve got a little bit of time for questions here if we wanna open it up to the audience. That’s yet another good example of the application of Operations Science and where we’re going with automating production systems and production system optimization.
[00:23:19] Zahra Mazlaghani: Yeah.
[00:23:20] Gary Fischer, PE: Where are you going next with your research?
[00:23:24] Zahra Mazlaghani: I’m gonna generalize this vision-based model to detect other cast-in-place activities. And then I’m gonna generalize this model again on another project, to get more and different granular data.
[00:23:53] Gary Fischer, PE: do you have a project lined up or are you shopping?
[00:24:01] Gary Fischer, PE: Alright, I’m not seeing any other questions. Oh, here we go. How long did it take to develop the steps, train the models, and execute the analysis?
[00:24:12] Zahra Mazlaghani: In total, I think maybe around three months, because first I used just the YOLO object detection model, and then I saw that it didn’t work well because
[00:24:32] Zahra Mazlaghani: I couldn’t generalize it. And that’s why I researched more, and I found out that a graph neural network works well because it can reduce the number of image annotations required for training the vision-based model. Yeah, I think I spent around three or four months
[00:25:02] Zahra Mazlaghani: Totally. Yeah.
[00:25:04] Gary Fischer, PE: So if you had the task to do something similar somewhere else, now that you’ve got this learning under your belt, how quickly could you do the same thing?
[00:25:14] Zahra Mazlaghani: Sorry, could you please repeat it? How quickly
[00:25:16] Gary Fischer, PE: could you do the same thing if you were at another similar site wanting to do the, a similar analysis now that you’ve got this under your belt?
[00:25:25] Zahra Mazlaghani: First I would need to observe the work process of each activity to make sure they have the same sub-activities based on the relationships between objects. Sure. And yeah, after finding that out, I can train the model quickly, because yeah, it would be easy to train it
[00:25:55] Gary Fischer, PE: so it would be much faster the second time.
[00:26:01] Gary Fischer, PE: All right. Thank you very much for that. That was really interesting and insightful. It gives us an idea of where things are headed, and the capability that we’re gonna have in the not too distant future.
[00:26:11] Zahra Mazlaghani: Thank you so much. Thank you.