Facebook 3D tour using Oculus Rift

Let's Meet is one of the academic projects we are mentoring at ESPRIT Mobile. It consists mainly of a 3D tour of your Facebook profile.

Game Development, Virtual Reality, Networking

  • 0 Collaborators

  • 1 Follower

Description

‘Let’s Meet’ is a 3D application that lets the user step inside a 3D virtual world through an Oculus headset and discover a new way of using social networks there. The world is divided into several sections, each representing one of Facebook’s services (friend list, pages, groups…), with the ability to perform every kind of action related to that service (chat, comment, share, like…).
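
The in-world actions described above would presumably be backed by calls to Facebook's Graph API. The sketch below is an illustration only, not the project's actual client code; the access token, the API version, and the object ids are all placeholder assumptions.

```python
# Hypothetical sketch of the Graph API calls behind the in-world actions.
# ACCESS_TOKEN and the object id are placeholders; a real client would obtain
# a token through Facebook Login and discover object ids from API responses.
import requests

GRAPH = "https://graph.facebook.com/v2.8"   # version assumed, era-appropriate
ACCESS_TOKEN = "<user-access-token>"        # placeholder

def get_friends():
    """Fetch the user's friends (those who also authorized the app)."""
    resp = requests.get(GRAPH + "/me/friends",
                        params={"access_token": ACCESS_TOKEN})
    resp.raise_for_status()
    return resp.json().get("data", [])

def like(object_id):
    """Like a post, photo, or comment identified by its Graph object id."""
    resp = requests.post(GRAPH + "/" + object_id + "/likes",
                         params={"access_token": ACCESS_TOKEN})
    resp.raise_for_status()
    return resp.json()

for friend in get_friends():
    print(friend["name"])
```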

The project relies heavily on audiovisual aids: indications placed throughout the environment guide the user in carrying out written and movement-based interactions.

“Let’s Meet” has been conceived and developed with heavy optimization, reducing its footprint and improving its response time so that the user is never kept waiting and the experience stays faithful to the immediacy of social networks. To meet the user's comfort needs, we designed a highly ergonomic environment that guides the user with clear, simple indications so that any interaction can be completed easily.

Given that Facebook has acquired Oculus, and considering its worldwide reach of roughly 1.75 billion active users, we believe ‘Let’s Meet’ could have a major impact and the scope to reach millions.

Like all the PIM projects we mentor (PIM stands for "Projet d'intégration mobile"), this project is built by four engineering students coached by one mentor. We are 10 associate professors/R&D engineers mentoring 120 engineering students per year.

Sunil A. updated status

Sunil Ammiti

I completed my bachelor's degree in Information Technology and worked as a UI Developer for two years. Now I'm pursuing a master's degree in computer science and engineering.

Chaplin M. updated status

Chaplin Marchais

Well, it's now 7am and I have been up for about 30 hours.... but the image recognition is now actually working in Azure!! Now time to take a power nap and then do some optimization with Intel's awesome suite of tools!

Chaplin M. updated status

Chaplin Marchais

Fifth night in a row I find myself still in front of the computer at 2AM.... I think I got up at least twice today though!! Progress... Oh well, the future doesn't build itself! ...... yet....

Moloti N. created project Intelligent Home Security: Africa Motion Content encoder decoder using Deep Neural Networks

Intelligent Home Security: Africa Motion Content encoder decoder using Deep Neural Networks

We propose the use of drones to help communities enhance their security initiatives and identify criminals during the day and at night. We use multiple sensors and computer vision algorithms to recognize and detect motion and content in real time, then automatically send messages about the criminal activity to community members' cell phones. Community members may thus be able to stop house break-ins before they even occur.
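
Before the learned model comes in, the motion-detection step of such a pipeline can be prototyped with classical frame differencing. A minimal OpenCV sketch under that assumption follows; send_alert is a hypothetical stand-in for the SMS/push notification step.

```python
# Minimal frame-differencing motion detector (OpenCV 4.x signatures).
# send_alert() is a hypothetical stand-in for the SMS/push step above.
import cv2

def send_alert(message):
    print("ALERT:", message)  # placeholder: the real system pushes to phones

cap = cv2.VideoCapture(0)     # 0 = local camera; a drone feed URL also works
ok, prev = cap.read()
prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    delta = cv2.absdiff(prev_gray, gray)               # pixel-wise change
    mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    mask = cv2.dilate(mask, None, iterations=2)        # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 500 for c in contours):  # ignore tiny noise
        send_alert("motion detected in monitored zone")
    prev_gray = gray

cap.release()
```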

Machine Intelligence Algorithm Design Methodology

AMCnet: https://github.com/AfricaMachineIntelligence/AMCnet https://devmesh.intel.com/projects/africa-motion-content-network-amcnet

We propose a deep neural network for the prediction of future frames in natural video sequences using the CPU. To effectively handle the complex evolution of pixels in videos, we propose to decompose motion and content, the two key components generating dynamics in videos.

The model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By modeling motion and content independently, predicting the next frame reduces to converting the extracted content features into the next frame's content via the identified motion features, which simplifies the task of prediction.

The model we aim to build should be end-to-end trainable over multiple time steps, learning to decompose motion and content without separate training. We evaluate the proposed network architecture on the AVA and UCF-101 human action datasets and show state-of-the-art performance in comparison to recent approaches. The result is an end-to-end trainable network architecture, running on the CPU, that separates motion and content to model the spatio-temporal dynamics for pixel-level future prediction in natural videos.
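
The decomposition described above can be sketched compactly. Below is a minimal, untrained PyTorch illustration, not the authors' implementation (see the AMCnet repository linked above): a content encoder reads the most recent frame, a small convolutional LSTM consumes frame differences as the motion stream, and a decoder fuses the two feature maps into the predicted next frame.

```python
# Minimal PyTorch sketch of motion-content decomposition for next-frame
# prediction. Illustrative only; for the real model see the AMCnet repo above.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single convolutional LSTM cell (all four gates from one conv)."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class MotionContentNet(nn.Module):
    def __init__(self, ch=1, hid=32):
        super().__init__()
        # Content encoder: spatial layout of the most recent frame
        self.content = nn.Sequential(nn.Conv2d(ch, hid, 3, padding=1), nn.ReLU())
        # Motion encoder: ConvLSTM over frame differences (temporal dynamics)
        self.motion = ConvLSTMCell(ch, hid)
        # Decoder: fuse content + motion features into the next frame
        self.decoder = nn.Sequential(nn.Conv2d(2 * hid, hid, 3, padding=1),
                                     nn.ReLU(),
                                     nn.Conv2d(hid, ch, 3, padding=1))

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, ch, hgt, wdt = frames.shape
        h = frames.new_zeros(b, self.motion.hid_ch, hgt, wdt)
        c = torch.zeros_like(h)
        for s in range(1, t):                      # feed frame differences
            h, c = self.motion(frames[:, s] - frames[:, s - 1], (h, c))
        content = self.content(frames[:, -1])      # layout of the last frame
        return self.decoder(torch.cat([content, h], dim=1))

clip = torch.rand(2, 4, 1, 64, 64)                 # toy batch of 4-frame clips
print(MotionContentNet()(clip).shape)              # -> torch.Size([2, 1, 64, 64])
```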

We then run this pretrained AMCnet model on the video feed from the DJI Spark drone, integrated with the Movidius Neural Compute Stick (NCS) to accelerate real-time object detection networks.
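
As a sketch of that deployment step, the loop below assumes NCSDK v1's mvnc Python API and a graph file already compiled with mvNCCompile; the stream URL and graph filename are placeholders (the DJI Spark actually exposes video through DJI's mobile SDK, so the plain RTSP URL here is an assumption).

```python
# Sketch of running a compiled graph on the Movidius NCS against a video feed.
# Assumes NCSDK v1 (mvnc); graph file and feed URL below are placeholders.
from mvnc import mvncapi as mvnc
import cv2
import numpy

devices = mvnc.EnumerateDevices()            # find attached NCS sticks
device = mvnc.Device(devices[0])
device.OpenDevice()

with open("amcnet_detector.graph", "rb") as f:      # placeholder graph file
    graph = device.AllocateGraph(f.read())

cap = cv2.VideoCapture("rtsp://192.168.1.1/live")   # placeholder drone feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.resize(frame, (300, 300)).astype(numpy.float16)  # network input
    graph.LoadTensor(blob, None)             # queue one inference on the stick
    output, _ = graph.GetResult()            # blocking read of the result
    # ... parse 'output' into detections and raise alerts as described above ...

graph.DeallocateGraph()
device.CloseDevice()
```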
