SPIDER-MAN: HOMECOMING - Virtual Reality Experience

Experience VR through the eyes of Spider-Man. Try it in select @Cinemark Theaters across the country, and see #SpiderManHomecoming in theaters.

Virtual Reality, Game Development

  • 0 Collaborators




Sony Pictures Virtual Reality announced on Friday a new VR experience for Columbia Pictures’ upcoming “Spider-Man: Homecoming” flick that will let players experience what it feels like to be Spidey. They’ll be able to do some target practice with Spider-Man’s new web shooters, and sling themselves through the air to face off against Spider-Man’s arch-nemesis, The Vulture. The experience will be available for free across all major VR platforms, including PlayStation VR, Oculus Rift and HTC Vive, on June 30, a week before the movie is scheduled to hit theaters.




Variety / Hollywood Article


Luckeciano M. created project Deep Reinforcement Learning for Humanoid Robot Walking and Kicking


Deep Reinforcement Learning for Humanoid Robot Walking and Kicking

In this project, we apply modern deep RL algorithms to optimize the walking engine and kicking in Soccer 3D Simulation. Parameter optimization of walking and kicking is a major challenge in the RoboCup Soccer 3D Simulation environment. Until last year, the walking and kicking parameters were optimized with the CMA-ES algorithm (an evolution strategy), which does not scale to hundreds or thousands of parameters, and experience has shown that the more parameters there are, the better the kicking and walking. The Soccer 3D community has now started applying neural networks to model the kick engine, and the results are very impressive. Because of this, we started a project to find better architectures for modeling the kick, and to apply the same techniques to the walking engine. We propose to learn the entire walking engine with these techniques, replacing the traditional double-inverted-pendulum-based walking engine.
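As a rough illustration of the shift away from CMA-ES, the sketch below tunes a stand-in "walking engine" with a score-function (REINFORCE / evolution-strategies-style) gradient estimator over its parameter vector. The 50-parameter quadratic reward is a toy assumption standing in for a rollout in the Soccer 3D simulator, not the actual walking engine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a walking-engine rollout: reward is higher
# the closer the parameter vector is to some unknown optimum TARGET.
TARGET = rng.normal(size=50)          # 50 walk-engine parameters (toy)

def rollout_reward(params):
    return -np.sum((params - TARGET) ** 2)

# Gaussian search distribution over the parameter vector,
# updated with a REINFORCE-style score-function gradient.
mean = np.zeros_like(TARGET)
sigma = 0.5                           # exploration noise scale
lr = 0.02                             # learning rate

def train(iters=300, pop=32):
    global mean
    for _ in range(iters):
        noise = rng.normal(size=(pop, mean.size))
        candidates = mean + sigma * noise
        rewards = np.array([rollout_reward(c) for c in candidates])
        # Normalize rewards into advantages for a lower-variance update
        advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Score-function estimate of the gradient of E[R] w.r.t. the mean
        mean += lr * (advantages @ noise) / pop
    return rollout_reward(mean)

before = rollout_reward(mean)
after = train()
```

Unlike CMA-ES, this style of gradient estimator only needs per-candidate rewards and a noise table, which is why it scales to the much larger parameter counts of a neural walking engine.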


Nikhil K. created project American Sign Language Translation using hand pose estimation


American Sign Language Translation using hand pose estimation

Sign language translation is a long-standing computer vision problem, and almost all approaches so far require special hardware for effective results. In this approach, hand pose estimation will be used to extract the locations of key points on a hand, and these will be fed as inputs to the encoder of a sequence-to-sequence model. Using CTC, the relevant hand pose vectors can be matched with their corresponding translations. For the entire model to work, a large dataset of sign language is needed.
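A minimal sketch of the data shapes and the CTC decoding step in such a pipeline, assuming a 21-keypoint hand pose estimator and a tiny mock vocabulary; the logits below are fabricated purely to show how greedy CTC decoding collapses repeated labels and drops blanks:

```python
import numpy as np

# Hypothetical per-frame input: 21 hand keypoints with (x, y) coordinates,
# as produced by an off-the-shelf hand pose estimator (assumed, not shown).
T = 40
pose_sequence = np.random.rand(T, 21, 2)
encoder_inputs = pose_sequence.reshape(T, -1)   # (T, 42) vectors for the encoder

# Greedy CTC decoding: collapse repeated labels, then remove blanks.
BLANK = 0
VOCAB = {1: "HELLO", 2: "WORLD"}                # mock gloss vocabulary

def ctc_greedy_decode(logits):
    best = logits.argmax(axis=1)                # most likely label per time step
    collapsed = [best[0]] + [b for prev, b in zip(best, best[1:]) if b != prev]
    return [VOCAB[l] for l in collapsed if l != BLANK]

# Mock decoder logits over {blank, HELLO, WORLD} for six time steps
logits = np.array([[5, 0, 0],
                   [0, 5, 0],
                   [0, 5, 0],
                   [5, 0, 0],
                   [0, 0, 5],
                   [0, 0, 5]], dtype=float)
```

CTC is attractive here because frame-level pose sequences are much longer than their translations, and no frame-by-frame alignment needs to be labeled.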


Prabhat K. updated status


Prabhat Kumar

Currently working on providing integration solutions to businesses in order to minimize their cost and maintenance, with the flexibility to accommodate changes as business needs evolve.


SAI SRI RAM N. updated status



Hi guys, this is Sriram, pursuing the 2nd semester of my 3rd year of B.Tech in Vijayawada. I am presently working on a project that requires machine learning, convolutional neural networks, image processing, Python, PHP, and other languages used for building a website. The main goal of my project is detecting the objects in a given image. This is a pretty hard project for me, but all the more interesting because it sets me apart from others. I have a team with one member, and I asked my teammate to learn the NumPy, SciPy, and pandas libraries. I have already installed NumPy, SciPy, pandas, and other libraries without installing Anaconda. I ran into a lot of confusion in this project, but I finally have a better overview of it, and my team is currently working on it. I firmly believe that my team can complete the project. Finally, I have completed some courses like machine learning, and I am presently studying NumPy, SciPy, pandas, and convolutional neural networks.


Bishaw S. created project Visual data parsing for automated reasoning expert system


Visual data parsing for automated reasoning expert system

Extract the text available in the input image/visual and process it to find the closest search result for the related terms. When an input visual containing the sentence "where is Mt. Everest?" is given, the output will be "Nepal"; when an input visual contains the sentence "how to get to Pokhara from Kathmandu", the output will be a Google Maps result with the route; when the input visual contains "Gautam Buddha", the output will be the Wikipedia result about Buddha. The system will be able to
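The routing step behind those three examples could be sketched as a rule-based dispatcher over text already extracted from the visual; the OCR step itself (e.g. Tesseract) is assumed and not shown, and the category names here are hypothetical:

```python
import re

# Hypothetical dispatcher: maps a query string (already OCR'd from the
# input visual) to a (handler, subject) pair. Handler names are made up
# for illustration; real ones would call a search/maps/encyclopedia API.
def route_query(text):
    t = text.lower().strip(" ?")
    if re.match(r"where is\b", t):
        # factual location question -> knowledge lookup ("Nepal")
        return ("knowledge-lookup", t.removeprefix("where is").strip())
    if re.match(r"how to get to\b", t):
        # directions question -> maps result with a route
        return ("maps-route", t.removeprefix("how to get to").strip())
    # bare entity -> encyclopedia article
    return ("encyclopedia", t)
```

Running the three examples from the description through this sketch routes them to the lookup, maps, and encyclopedia handlers respectively.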



  • Projects 4
  • Followers 129

Bob Duffy

Folsom, CA, USA

  • Projects 0
  • Followers 67

Wendy Boswell

Program Manager, Intel Software Innovators

Portland, OR, USA