Facebook 3D tour using Oculus Rift

'Let's meet' is one of the academic projects we are mentoring at ESPRIT Mobile. It consists mainly of a 3D tour of your Facebook profile.

Virtual Reality, Networking, Game Development

Description

'Let's meet' is a 3D application that lets the user step inside a 3D virtual world using an Oculus headset and discover a new way of using social networks. The user experiences a unique virtual 3D world divided into several sections, each representing one of the Facebook services (friend list, pages, groups…), with the ability to perform all kinds of actions related to that service (chat, comment, share, like…).
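
Each section is essentially a 3D rendering of data pulled from Facebook. As a rough illustration (not the project's actual code), here is a minimal Python sketch of the kind of request the friend-list section would rely on, assuming the Facebook Graph API and the requests library; the API version, token handling, and field choices are illustrative only:

    import requests

    GRAPH_URL = "https://graph.facebook.com/v2.8"  # API version is an assumption

    def fetch_friend_list(access_token):
        # Ask the Graph API for the user's friends (name + profile picture):
        # the data a "friend list" section of the 3D world would render.
        resp = requests.get(
            GRAPH_URL + "/me/friends",
            params={"access_token": access_token, "fields": "name,picture"},
        )
        resp.raise_for_status()
        return resp.json().get("data", [])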

The project relies mainly on audiovisual aids: indications placed throughout the environment allow the user to interact through both text and movement.

"Let's Meet" has been conceived and developed with heavy optimization, reducing its footprint and improving its response time so that the user never waits long, in keeping with the responsiveness expected of social networks. To meet the user's comfort needs, we have designed a highly ergonomic environment that guides the user with clear, simple indications so that any interaction can be carried out easily.

Knowing that Facebook has acquired Oculus, maker of the Rift headset, and considering Facebook's worldwide reach of some 1.75 billion active users, we believe 'Let's meet' could have a major impact and the scope to reach millions.

Like all the PIM projects we mentor (PIM is an acronym for "Projet d'intégration mobile", French for "mobile integration project"), this project is built by 4 engineering students coached by one mentor. We are 10 associate professors/R&D engineers mentoring 120 engineering students per year.

Gunasekaran S. updated status

Gunasekaran Sengodan

I'm learning and trying to develop my own ideas to solve many of the real-world problems that exist in the IoT space (whether for a person, a business, an NGO, etc.). I'm looking to learn what best practices are already available, to share my ideas, and to collaborate with like-minded engineers, discussing various ideas to bring them to the next level as I progress every day.

donel a. updated status

donel adams

Google or Bing my name to view associated networks fostering business development by maximizing any platform's potential for growth using AI.

Dhruv R. created project Business Card Scanner

Business Card Scanner

Here’s how it works:

It uses Google's Tesseract to build an Optical Character Recognition (OCR) engine inside the phone once installed, so it works completely offline.

Then, using Leptonica image processing and various other algorithms, the captured image is enhanced to best suit the OCR step.

The engine then extracts the text, which undergoes entity detection using Apache OpenNLP (Open Natural Language Processing).

The entities are put under the appropriate fields, and the contact is saved in the phone directory along with the business card.

Using Parse as the backend, Android Studio as the IDE, and Stack Overflow as a mentor, I completed the app in a one-month time period.
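
As a rough illustration of the pipeline described above (not the app's actual Android code, which uses Tesseract, Leptonica, and Apache OpenNLP), here is a minimal Python sketch with pytesseract and OpenCV standing in for the OCR and image-enhancement steps, and a simple regex pass standing in for the OpenNLP entity detector:

    import re
    import cv2
    import pytesseract

    def scan_business_card(image_path):
        # Enhance the photo for OCR: greyscale, then Otsu binarisation
        # (a stand-in for the Leptonica preprocessing the app performs).
        img = cv2.imread(image_path)
        grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(grey, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Extract the raw text with the Tesseract OCR engine.
        text = pytesseract.image_to_string(binary)

        # Crude entity detection: pull email and phone fields out of the
        # text so they can be saved under the right contact fields.
        email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
        phone = re.search(r"\+?\d[\d\s()-]{7,}\d", text)
        return {
            "raw_text": text,
            "email": email.group(0) if email else None,
            "phone": phone.group(0) if phone else None,
        }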

Dhruv R. updated status

Dhruv Rathi

Classifying fish in underwater images is a huge problem for fishermen, survey guides, organisers, and governments themselves. I am working on a new algorithm that classifies fish into their corresponding species based on texture features extracted from underwater images.

Suraj R. created project Vehicle Detection

Vehicle Detection

Detecting vehicles in a video stream is an object detection problem, and an object detection problem can be approached as either a classification problem or a regression problem. In the classification approach, the image is divided into small patches, each of which is run through a classifier to determine whether it contains an object; bounding boxes are then assigned to the patches with positive classification results. In the regression approach, the whole image is run through a convolutional neural network directly, generating one or more bounding boxes for the objects in the image.
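
As a sketch of the classification approach (the regression approach is what YOLO, below, uses), the patch scan can be written as a simple sliding window; classify_patch here is a hypothetical stand-in for any trained vehicle/non-vehicle classifier, and image is assumed to be a NumPy array such as a video frame:

    def detect_by_classification(image, classify_patch, window=64, stride=32):
        # Slide a fixed-size window over the image and keep a bounding box
        # for every patch the classifier marks as positive.
        boxes = []
        height, width = image.shape[:2]
        for y in range(0, height - window + 1, stride):
            for x in range(0, width - window + 1, stride):
                patch = image[y:y + window, x:x + window]
                if classify_patch(patch):
                    boxes.append((x, y, x + window, y + window))
        return boxes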

The goal of this project is to detect the vehicles in a camera video. The You Only Look Once (YOLO) algorithm is used here to detect vehicles in a dash-camera video stream. This capability is an important building block for self-driving cars, since the model can also be trained to recognize birds, people, stop signs, signals, and much more.

In this project, we implement version 1 of Tiny-YOLO in Keras, since it is easy to implement and reasonably fast.
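
As a sketch of what that might look like in Keras, here is a minimal model definition; the layer widths follow the commonly published Tiny-YOLO v1 layout (described in the next paragraph) and should be treated as assumptions rather than this project's exact code:

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, LeakyReLU

    def build_tiny_yolo_v1():
        # 9 convolution layers (3x3, leaky ReLU), with max pooling after
        # the first six, followed by 3 fully connected layers that end in
        # the flat 1470-value prediction vector.
        model = Sequential()
        model.add(Conv2D(16, (3, 3), padding="same",
                         input_shape=(448, 448, 3)))
        model.add(LeakyReLU(alpha=0.1))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        for filters in (32, 64, 128, 256, 512):
            model.add(Conv2D(filters, (3, 3), padding="same"))
            model.add(LeakyReLU(alpha=0.1))
            model.add(MaxPooling2D(pool_size=(2, 2)))
        for filters in (1024, 1024, 1024):
            model.add(Conv2D(filters, (3, 3), padding="same"))
            model.add(LeakyReLU(alpha=0.1))
        model.add(Flatten())
        model.add(Dense(256))
        model.add(Dense(4096))
        model.add(LeakyReLU(alpha=0.1))
        model.add(Dense(1470))  # 7x7 grid x 30 values per cell
        return model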

The YOLO approach to object detection consists of two parts: a neural network that predicts a vector from an image, and a post-processing step that interprets the vector as box coordinates and class probabilities. For the neural network, Tiny-YOLO v1 consists of 9 convolution layers and 3 fully connected layers. Each convolution layer combines convolution and leaky ReLU, with max pooling in the earlier layers. The output of this network is a 1470-element vector containing the information for the predicted bounding boxes. The 1470-element vector is divided into three parts, giving the class probabilities, confidences, and box coordinates.
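
Concretely, with the standard YOLO v1 settings (S = 7 grid cells per side, B = 2 boxes per cell, C = 20 classes, so 7 x 7 x (20 + 2 x 5) = 1470), the split can be sketched in NumPy as follows; the flat ordering, class probabilities first, then confidences, then box coordinates, matches the three parts described above, though the exact reshape layout is an assumption:

    import numpy as np

    S, B, C = 7, 2, 20  # grid size, boxes per cell, number of classes

    def split_yolo_vector(v):
        # Split the flat (1470,) prediction into its three parts.
        assert v.shape == (S * S * (C + B * 5),)  # 7*7*30 = 1470
        class_probs = v[:S * S * C].reshape(S, S, C)                  # 980 values
        confidences = v[S * S * C:S * S * (C + B)].reshape(S, S, B)   # 98 values
        boxes = v[S * S * (C + B):].reshape(S, S, B, 4)               # 392: x, y, w, h
        # Class-specific score per box = P(class | object) * confidence.
        scores = class_probs[:, :, None, :] * confidences[:, :, :, None]
        return class_probs, confidences, boxes, scores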
