
Week 3:
Webcam Processing

Friday 18th February 2022

WEEK 3 - TO DO: Webcam Processing Research

 

PURPOSE FOR RESEARCH

In yesterday's discussion with James, we talked through the restrictions of our set-up and the possibilities for interaction and methods of input. After an hour-long conversation, James suggested a new method of input: webcam processing. We revisited our choice of Touch Controllers and discovered that we can achieve the same effect using motion to fulfil our intended experience. Kiera and I were keen to learn more about this and potentially use it as our input method, rather than providing players with controllers in the space.

​

Today, I will be researching webcam processing to build a greater understanding of the topic, and evaluating how we can use it to fulfil our player experience in-game.

​

WHAT WILL I BE DOING?

I will prepare a few research questions to get me started, then begin a deep dive into the topics of webcam/image processing and pose prediction. This will give me what I need to know about how it works and exactly how players will interact in-game.

​

GOALS FOR THE DAY​

My main goal for today is to gather research, build a greater understanding of the topic of webcam processing, and document my findings through video examples and note-taking.

​

RESEARCH PLAN/QUESTIONS

​

  1. What is webcam/image processing, and how is it set up?

  2. Examples of it in use.

  3. How can we use it? How can we apply this to our own game development?

TASK 1

​

RESEARCH: What is Webcam/Image Processing?​

UNDERSTANDING WEBCAM PROCESSING

What do I know already?​

From talking with James and briefly touching on the topic, I know that you can use webcam/image processing and/or 'pose prediction' to translate physical movement (i.e. gesture and motion) into the digital world through your webcam. Essentially, this will allow players to interact with the digital environment through simple gestures picked up by their laptop/device webcam. We learned that it's possible to identify a body part as a method of input, allowing people to use an allocated body part to control something on screen.

For our game, this would be a great way to encourage slow body movement and the regulation of the more physical symptoms of anxiety and stress, if the interaction itself does this. As our gameplay involves collecting musical items across a landscape, we can design it in a way that influences the player to move slowly. The 'character' or element players will be controlling in this case is a bird, which is already associated with gentle, elegant movement, so it should influence players to move that way; we can also think about how to encourage players to move with the beat of the music.

In essence, using this form of input would be a great way to control both the mind and body, and provide players with the appropriate tools for emotional and physical regulation.

HOW CAN IT BE USED?

"Creating Webcam Effects with Processing"​

NOTES

  • Need to download Processing IDE and a webcam!

  • Learning webcam manipulation

  • "Processing is a powerful tool which allows the creation of art through code"

  • Processing is capable of "manipulating live video"

  • Gives you the ability to flip live video, resize it, change colours and use motion to control/follow the mouse cursor

  • Can detect the user's mouse

  • Detect motion to manipulate what's shown on screen - can be used to change colours, adjust sizes/scale etc. (see the sketch below the screenshot)

[Image: simple webcam display sketch in Processing]
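The tutorial itself uses the Processing IDE, which I'm not documenting code-for-code here. As a rough sketch of the same effects in Python with OpenCV (my own analogue, assuming the opencv-python package, not the tutorial's actual Processing code):

```python
# Rough OpenCV analogue of the Processing webcam effects listed above.
import cv2

cap = cv2.VideoCapture(0)  # open the default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    flipped = cv2.flip(frame, 0)                     # flip the live video
    small = cv2.resize(frame, None, fx=0.5, fy=0.5)  # resize it
    tinted = frame.copy()
    tinted[:, :, 0] = 0                              # change colours (drop blue)
    cv2.imshow("flipped", flipped)
    cv2.imshow("resized", small)
    cv2.imshow("tinted", tinted)
    if cv2.waitKey(1) & 0xFF == 27:                  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```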

REFLECTION

It's interesting how you can use Processing to control what's displayed, and I like the idea of manipulation. This could be turned into a game-like feature by thinking about how players will have agency over their environment and how they'll exercise it. From what I have seen in this short demo, I already like the idea of hand-tracking because I think it can be a great way to influence slow body movement, like I mentioned before - perhaps players could control the bird using motion, whether with their whole bodies or a specific body part.

[Images: four-colour filter and upside-down webcam video in Processing]

HOW CAN WE INCORPORATE GESTURE INTO PROCESSING?

"Gesture Interface using Webcam"​

REFLECTION

Combined with my previous findings, this has made me more enthusiastic about the idea of hand/body tracking and giving players the opportunity to control the bird or an object through webcam processing. I also liked the idea of 'hand-painting', which I have attached below. It could be used to reveal colour within the environment or simply to leave your own personal touch within the space/on screen.

​

From the video demonstrations, I saw that he used different parts of his body to do different things. I liked that he used his whole body to move a platform across the screen to collect objects - a fun way to incorporate the full body into gameplay. However, I prefer the idea of a single body part, to reduce how much is asked of players and not expect too much from them.

NOTES

  • Sense virtuality

  • Gesture computing

  • Mouse control - use gesture to control an object on screen; the webcam tracks the hands and their positions.

  • Gesture Painting - use gesture to paint colours, different shapes and different patterns on screen

  • Create virtual boxes - change colours and select different boxes

  • Target Following - an object follows the direction of hand movement (or any body part!)

  • Body movement - use the body to control an object across the screen, left and right (e.g. a platformer)

Hand-tracking

[Screenshots: hand-tracking demo]

Hand-painting​

EVALUATION

I love the idea of gesture control and body recognition, and I think it could be much more effective than our original input method, the Oculus Quest 2 Touch Controllers. Considering our set-up and why we are making this space, it makes more sense to let players move freely without the need for additional equipment. To move forward with this, I'd like to document a little on how it works, to fully understand the process of setting something like this up. Although I am not focused on the coding side of things, I'd still like to look into it.

HOW DOES IT WORK? WHERE DO WE BEGIN?

"Hand Gesture Recognition"​

​

Tutorial by Sadaival Singh.

EXERCISE

Creating a program that identifies hand gestures in real time using a webcam.

​

HOW?

The program will identify a user's hand "within a given space on screen" and determine the number of fingers they are holding up. This then results in a chosen output.

​

  1. Select the boundary of the input image (scan for the presence of a human hand)

  2. Create a mask - select pixels that match a specified colour range

  3. Blur the mask image to fill in missing data

  4. Draw a 'contour' of the hand and identify the individual fingers (sketched in code below)
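To make these steps concrete, here is a hedged sketch of the same pipeline in Python with OpenCV (assuming the opencv-python and numpy packages; the HSV skin-colour range is my own guess and would need tuning, just as in the tutorial):

```python
# Hedged sketch of the four steps above in Python/OpenCV.
import math

import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:400, 100:400]  # 1. boundary where the hand is expected
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]),
                       np.array([20, 150, 255]))   # 2. colour-range mask
    mask = cv2.GaussianBlur(mask, (5, 5), 0)       # 3. blur to fill in data
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    fingers = 0
    if contours:
        hand = max(contours, key=cv2.contourArea)  # 4. contour of the hand
        hull = cv2.convexHull(hand, returnPoints=False)
        if len(hull) > 3:
            defects = cv2.convexityDefects(hand, hull)
            if defects is not None:
                for i in range(defects.shape[0]):
                    s, e, f, _ = defects[i, 0]
                    a = np.linalg.norm(hand[e][0] - hand[s][0])
                    b = np.linalg.norm(hand[f][0] - hand[s][0])
                    c = np.linalg.norm(hand[e][0] - hand[f][0])
                    if b == 0 or c == 0:
                        continue
                    # a valley between two fingers makes a sharp angle
                    cos_angle = (b**2 + c**2 - a**2) / (2 * b * c)
                    if math.acos(max(-1.0, min(1.0, cos_angle))) < math.pi / 2:
                        fingers += 1
        if fingers:
            fingers += 1  # n valleys between fingers = n + 1 fingers
    cv2.putText(frame, f"fingers: {fingers}", (30, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("hand gestures", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```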

​

NOTES

  • The webcam identifies the number of fingers players hold up and translates this into a chosen output.

  • In this case, one thumbs up = 'Good Job', which is then displayed on screen.

  • I've learned from this that you can essentially map a gesture to any output you decide on, whether that's something displayed on screen or an object attached to the moving part.

  • This allows users to be in control of something and gives them the freedom to move however/wherever they want.

[Image: finger-counting output from the tutorial]

TASK 2

​

RESEARCH: Examples of it in use​

I struggled to find much information around the topic, especially anything recent! Most of what I found above is from years ago, hence the old-style desktop screens. I found a few examples demonstrating how webcam processing works and how it can be used, but I'd like to document the different things you can do with it, such as changing filters and styles. While Kiera and I wait for James to create a demo for us to explore this topic further, becoming familiar with the program will help us push ourselves when it comes to how else we might use this form of interaction.

Hand Gesture Recognition with Webcam​

[Figures: hand gesture recognition examples]

NOTES

  • Ability to use specific gestures with different allocated outputs.

  • Identify specific inputs and outputs.

  • Different gestures do different things/have different 'functions'.

Gesture Controlled Video-Game

NOTES

  • Ability to control any object using a specific body part or motion.

  • Control objects in-game to do specific things.

  • Gesture maps to in-game mechanics.

  • Translate physical body movement into game mechanics and interactions.

  • In this scenario, he controls the placement of the car (left or right) using just his index finger.

[GIF: index-finger-controlled car game]

Processing Visuals with Webcam

[Image: webcam visuals in Processing]

NOTES

  • Ability to apply different filters to webcam/screen

  • Styles

  • Colours

  • Tints

Handtrack.js - Gesture Control Interaction Demo

Using the Handtrack.js demo online, I was able to test webcam/image processing myself. The site includes the scripts used to achieve this interaction, which might be useful to look at in the future.

​

The second link demonstrates how three lines of code and TensorFlow.js can be used to enable hand-tracking interactions. I found this extremely interesting, and it could influence the way we go about our own program.
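Handtrack.js runs in the browser as JavaScript, so the snippet below is not its actual code; it's a rough Python analogue of the same 'few lines of code' idea, assuming the mediapipe and opencv-python packages:

```python
# Rough Python analogue of the few-lines hand-tracking idea
# (NOT the Handtrack.js/TensorFlow.js code from the site).
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)  # hand landmark model
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[8]  # index fingertip
        h, w = frame.shape[:2]
        cv2.circle(frame, (int(tip.x * w), int(tip.y * h)), 10, (0, 255, 0), -1)
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```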

TASK 3

​

REFLECTION & EVALUATION: How can we use Webcam/Image Processing?​

[Screenshots: webcam-controlled platform game]

WHAT DID I DO?

I was able to use my webcam to control the platform in this basic game, where I had to move the platform to catch the falling dots and score points. The program identified my face as the input method, and I was able to control the position of the platform with my physical movement. A rough sketch of how something like this might work is below.
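I don't have the site's source, so this is a hedged guess at the technique rather than its actual code: a face-tracked paddle put together in Python with OpenCV's bundled Haar cascade.

```python
# Minimal sketch of face-position input like the demo I played
# (an assumption about how it works, using OpenCV's bundled cascade).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
paddle_x = 320  # paddle centre in a 640px-wide window

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror so movement feels natural
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        paddle_x = x + w // 2  # follow the face's horizontal position
    # draw the 'platform' near the bottom of the frame
    cv2.rectangle(frame, (paddle_x - 60, 440), (paddle_x + 60, 460),
                  (255, 0, 0), -1)
    cv2.imshow("face-controlled platform", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```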


REFLECTION

Although the gameplay was very simple, I enjoyed using this program; I liked the interaction and was curious about how it worked. I find this type of interaction much more appealing and interesting than a traditional gamepad or controller, and I am much happier with this choice than with the Touch Controllers. This input will spark interest and curiosity in players, as well as allowing them to interact through slow physical body movement. I found myself playing for around 10 minutes - I became immersed in the interaction even though the gameplay wasn't especially interesting or rewarding!

WHAT HAVE I LEARNED?

Today's research has taught me a lot about the topic that I wasn't aware of already. There is a lot more we can do with this process than I first imagined, but one idea stuck out to me the most: using hand-tracking to interact in-game. I think processing different visuals using the webcam is interesting but not relevant enough for our project; something centred on motion and body movement would be more suitable. I learned that you can essentially program any part of the body to perform any specific action/output through webcam processing, meaning our possibilities are limitless.

BENEFITS

The benefit of using this process for our project is that physical movement is translated into the digital world. We had the idea of using Touch Controllers to do a similar thing, but this requires no additional equipment whatsoever. Using webcam processing and gesture, players will be able to enter the space freely and interact through simple body movement, without much thought or set-up required. As well as this, the whole purpose of our project is to encourage emotional and physical regulation in those struggling with symptoms of anxiety and stress, and providing them with a tool to move slowly through rhythmic movement will help enormously with this - which is why this form of interaction could be successful if executed properly.

DRAWBACKS/LIMITATIONS

The clearest limitation of this process is the programming and set-up behind it. Thankfully, we have been offered help by our previous tutor, James Stallwood, who has expressed his enthusiasm for the project and would love to take it further. One drawback is that Kiera and I will have less involvement with the coding side of things, as we are not as confident with motion/gesture control. In addition, the set-up for our space will be a little different and will require webcams embedded into each screen to enable players to interact. This will make it harder to establish a suitable set-up, but if it fulfils our intended experience then it's something we must consider.

HOW WOULD WE APPLY THIS TO GAME DEVELOPMENT?

​

  • How would we use this in our current project?

  • Replace Touch Controllers?

  • The set up of the space?

NEW METHOD OF INPUT

​

The process of hand recognition would replace the need for Touch Controllers, requiring no physical controller or gamepad from players at all. This form of input will essentially do exactly what the Touch Controller would have done, which is to control the bird. Players will face one of four screens at a time and use a specific gesture to control the position of the bird and the direction in which it flies. They will have to stand in line with the webcam, which will be placed somewhere on top of or in the wall. The webcam will then identify their hand (or a body part of our choice) and translate its movement into the digital environment, moving the bird to collect the musical items - a rough sketch of this mapping is below.
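As a hypothetical sketch of that mapping (the function name, coordinates and easing factor are my own assumptions, nothing is decided yet): normalised hand coordinates from the webcam are mapped to screen space, and the bird eases a small fraction of the way toward them each frame.

```python
# Hypothetical bird-follows-hand mapping (names and values are assumptions).
def smooth_follow(bird_pos, hand_norm, screen_size, ease=0.08):
    """Move the bird a small fraction of the way toward the hand each frame.

    bird_pos    - current (x, y) of the bird in screen pixels
    hand_norm   - tracked hand position as normalised (0-1) webcam coords
    screen_size - (width, height) of the display in pixels
    ease        - low values force slow, gliding movement
    """
    target_x = hand_norm[0] * screen_size[0]
    target_y = hand_norm[1] * screen_size[1]
    return (bird_pos[0] + (target_x - bird_pos[0]) * ease,
            bird_pos[1] + (target_y - bird_pos[1]) * ease)

# Each frame, e.g.: bird = smooth_follow(bird, (tip.x, tip.y), (1920, 1080))
```

The low easing factor doubles as a design lever: however quickly players move, the bird still glides, which suits the calm pacing we're after.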

​

REFERRING BACK TO OUR PROJECT GOALS

​

I believe using hand recognition will be much more effective than any form of controller. It sparks interest and curiosity, as well as being playful and beneficial to players. Its benefit comes from the ability to move freely in the space alongside soothing piano music, which on its own should encourage slow body movement. This should relax and calm players. It fulfils our intended experience by allowing players to have control over their environment - this time by being in control of both mind and body, something we can feel a lack of during moments of distress.

​

CHANGES WE HAVE TO MAKE

​

Kiera and I have been in discussion and feel it's best to progress with this idea, to enhance gameplay and aim bigger for this project. We know we must organise time with James at least weekly to talk through the project and how we will program this method of interaction for the game; we have already scheduled a session for next week. Over the next few weeks, we should build a greater understanding of how to tie this together with our gameplay idea and how to get our first game version into development! For now, we have decided to put the box construction on hold and rethink the set-up of the space, as we're aware the use of webcams requires a more extensive set-up than anticipated.

 

NEXT STEPS

For now, Kiera and I will finish the week off with our set tasks and meet with James next week to discuss further steps with webcam processing. Today was a way to think the option through, and we have both agreed we prefer this to our initial idea. My task for the rest of the week is to focus on creating concepts and mockups for the space, as I recently gathered more insight into the theme of our game. As Art Lead, I feel it is best to begin early and have concepts prepared for when I create the Art Bible and define the look of the game, which should take place around Week 5. This week, Kiera will also be prototyping the main interaction in Virtual Reality, so we have an alternative playable experience while we establish the set-up for our main idea.

​

Work for the remainder of the week can be found here:
Concepts & Mockups

Prototyping

​
