Multi-Camera Real-Time 3D Modeling Intelligent System for IKM

The Challenge

IKM approached us with an idea: build a system that could map humans and their environment, as captured by cameras, into a virtual 3D representation.

We got excited, and so began a journey of innovation, intense development, and norm-breaking intelligent models that culminated in HUMANS 3D.

The Concept

We had to map the physical environment of a city, and the people in it, to 3D space. The 3D environment had to hold everything, covering both indoor and outdoor scenarios.

We had to devise a mechanism that would place people in the virtual 3D environment based on their real-world coordinates. We had to get everything right: the pose, the texture, and the precise location.

Solution

We created a Python desktop application with an intuitive, interactive UI. The architecture was designed as a fully scalable solution that could be extended to ingest data from other sensors such as LiDAR and laser cameras.

  • The outdoor model
    We first analyzed the camera feeds with our computer vision models to segment people and objects and to obtain a dense mapping of human poses. The feeds were then mapped into the 3D environment using our calibration app (a sketch of this mapping follows the list below). The final result was people walking inside a Google Earth-like 3D environment.

  • The indoor model
    We used depth information from the camera to generate 3D maps of the building. Once a 3D map was obtained, we placed the people models produced by our deep learning module in the indoor environment (see the back-projection sketch after this list).

    All the camera feeds were captured and processed in real time on an NVIDIA Tesla GPU.
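
The case study does not include the calibration code itself, but a minimal sketch of the kind of image-to-ground mapping involved, assuming a flat ground plane and four hand-picked reference points (all names and coordinates below are illustrative, not the actual calibration app), could look like this:

    import cv2
    import numpy as np

    # Hypothetical calibration data: four pixels picked in the camera image
    # and their known ground-plane coordinates in metres.
    image_pts = np.array([[320, 720], [960, 715], [900, 400], [380, 405]], dtype=np.float32)
    world_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 25.0], [0.0, 25.0]], dtype=np.float32)

    # Estimate the image-to-ground homography once, during calibration.
    H, _ = cv2.findHomography(image_pts, world_pts)

    def image_to_world(foot_pixel):
        """Map a detected person's foot pixel (x, y) to ground-plane coordinates."""
        px = np.array([[foot_pixel]], dtype=np.float32)   # shape (1, 1, 2)
        return cv2.perspectiveTransform(px, H)[0, 0]      # (X, Y) on the ground

    # Example: place a person at the bottom-centre of their bounding box.
    print(image_to_world((640, 700)))

With a mapping like this, each segmented person's ground contact point in the image yields a world coordinate at which their 3D model can be placed.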
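
Similarly, for the indoor model, depth pixels are typically back-projected into 3D using the camera intrinsics (with the ZED 2 these come from the SDK's calibration parameters; the values below are placeholders):

    import numpy as np

    # Hypothetical intrinsics: focal lengths and principal point, in pixels.
    fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0

    def depth_to_points(depth):
        """Back-project an (H, W) depth map in metres into an (N, 3) point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        # Drop pixels with no valid depth reading.
        return pts[np.isfinite(pts).all(axis=1) & (pts[:, 2] > 0)]

Point clouds accumulated this way over many frames give the 3D map of the building into which the people models are placed.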

How it Works – An Overview

Challenges

Like any project, this one had its challenges, but this project was also novel: there were no prior implementations to learn from. Everything we built came out of the rigorous thought and discussion sessions of our experts.

The first challenge appeared when we converted people to 3D models: the movement was jittery and not life-like.
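
The write-up does not detail the fix, but a common mitigation for this kind of jitter, sketched here purely as an illustration rather than as our exact pipeline, is temporal smoothing of the per-frame pose parameters, for example with an exponential moving average:

    import numpy as np

    class PoseSmoother:
        """Exponential moving average over per-frame pose vectors.

        alpha near 1.0 trusts the new frame (less lag, more jitter);
        alpha near 0.0 smooths harder (more lag, less jitter). Averaging
        axis-angle rotations like this is only an approximation, but it
        is often adequate for small frame-to-frame changes.
        """

        def __init__(self, alpha=0.3):
            self.alpha = alpha
            self.state = None

        def update(self, pose):
            pose = np.asarray(pose, dtype=np.float64)
            if self.state is None:
                self.state = pose.copy()
            else:
                self.state = self.alpha * pose + (1.0 - self.alpha) * self.state
            return self.state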

The second issue came with the 3D environment. It had to run smoothly in Python, and few open-source 3D models were available that loaded correctly in Python, so we had to implement custom fixes and converters for those formats.
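
As an illustration of the kind of converter involved (a minimal sketch under assumed paths and a chosen target format, not our actual fix-up code), Trimesh can load many formats and re-export them into something the engine's loader accepts:

    import trimesh

    def convert_for_engine(src_path, dst_path="converted.obj"):
        """Load an arbitrary mesh format and re-export it as OBJ.

        trimesh reads many formats (PLY, STL, OFF, glTF, ...) that a game
        engine's own loader may reject; re-exporting normalises the file.
        The paths and the target format here are illustrative.
        """
        mesh = trimesh.load(src_path, force="mesh")  # flatten scenes to one mesh
        mesh.export(dst_path)                        # format inferred from extension
        return dst_path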

Finally, it proved difficult to achieve real-time performance with high-polygon 3D graphics and deep learning models running in parallel.
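
One standard pattern for this, sketched below under our own assumptions rather than as the project's exact code, is to run inference on a worker thread and let the render loop poll for the latest result without blocking (PyTorch and OpenCV usually release the GIL during heavy operations, so a thread is often enough):

    import queue
    import threading

    frames = queue.Queue(maxsize=2)   # camera frames awaiting inference
    poses = queue.Queue(maxsize=2)    # latest pose estimates for the renderer

    def model(frame):                 # placeholder for the real network
        return frame

    def inference_worker():
        """Run the deep learning model off the render thread."""
        while True:
            frame = frames.get()
            if frame is None:              # sentinel: shut down the worker
                break
            result = model(frame)          # stand-in for the real forward pass
            try:
                poses.put_nowait(result)
            except queue.Full:             # renderer fell behind: drop stale data
                try:
                    poses.get_nowait()
                except queue.Empty:
                    pass
                poses.put_nowait(result)

    threading.Thread(target=inference_worker, daemon=True).start()

    def render_tick():
        """Called once per rendered frame (e.g. from a Panda3D task)."""
        try:
            latest = poses.get_nowait()
            # ... update character transforms from `latest` ...
        except queue.Empty:
            pass                           # nothing new; keep drawing at full rate

Bounded queues are the key design choice here: dropping stale results keeps the renderer at full frame rate instead of letting inference latency accumulate.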

Technologies Used

  • Python

  • Panda3D

  • Trimesh

  • Kivy

  • ZED 2 SDK

  • PyTorch

  • OpenCV

  • SMPL-X

  • PyGame

