Today we had our first testing session, with people from the class and a couple of people from outside the project. Before we started I had asked some people I know if they wanted to test, so that we would have testers beyond just people from within the class.
As for preparation, we wrote down some things that we wanted to know from our testers. These things were:
- What people felt when they looked around;
- How they felt as a person within the world;
- Whether they felt big or small in comparison to the world;
- Whether the hand gestures respond well to what they want to achieve;
- Whether it is clear what happens based on what they do.
So, the main focus of this test was two things. The first: does the environment have a “fashion feel”? This is important because the client wants the experience to feel like their brand (values). The second: is hand tracking easy to use (for our target group)? This one is important because we want to use hand tracking since it feels more natural and intuitive, BUT if it does not work well it makes people even more frustrated.
From our testing goals we derived the following questions:
- What is the feeling you get when you look around you?
- How do you feel within the world?
- Do you feel too small in comparison to the world?
- Do the hand gestures react well to what you want to do?
- Is it clear what happens based on what you do?
- Is the “fashion feel” clearly recognizable in the environment?
- What is your standard hand position? (To know how people hold their hands when doing nothing).
We let people play the game, had them tell us things while playing if they wanted to, and noted those remarks in a document. Afterwards we asked them all the questions noted above. You can read the document after the next paragraph.
In conclusion, we got good answers to our questions. We learned that the world needs a lot more fine-tuning before it has the right feel; I think that once we add lighting and particles, the environment will feel more cohesive and finished. We also learned that the hand tracking works, BUT movement should not be done via hand tracking, because people kept moving without wanting to. We also now know that the gestures are not precise enough to use for the character creation. This is partly because the gesture detector sometimes recognizes gestures while a person is transitioning between gestures (or doing nothing), which means that, for instance, the scaling of the character resets even though the player did not want that.
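One common way to soften this problem (nothing we have built yet, just a minimal sketch with made-up names) is to debounce the detector's output: only act on a gesture once the detector has reported the same label for several consecutive frames, so a one-frame misread during a transition never fires.

```python
# Minimal sketch of debouncing per-frame gesture labels (hypothetical names):
# a gesture only becomes "active" after it has been reported for a number of
# consecutive frames, so brief misreads during transitions are ignored.

class GestureDebouncer:
    def __init__(self, hold_frames=8):
        self.hold_frames = hold_frames  # frames a label must stay stable before we act on it
        self.candidate = None           # label currently being counted
        self.count = 0                  # consecutive frames we have seen the candidate
        self.active = None              # last committed gesture

    def update(self, label):
        """Feed the raw per-frame label ('scale', 'none', ...); returns the committed gesture."""
        if label == self.candidate:
            self.count += 1
        else:
            self.candidate = label
            self.count = 1
        if self.count >= self.hold_frames:
            self.active = self.candidate
        return self.active

# Example: a single-frame flicker to 'scale' between two poses is ignored.
debouncer = GestureDebouncer(hold_frames=3)
frames = ["none", "none", "none", "scale", "none", "none", "grab", "grab", "grab"]
for f in frames:
    print(f, "->", debouncer.update(f))
```

With something like this, the scaling reset would only trigger when the player deliberately holds the gesture, not when their hand briefly passes through a similar shape.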
We are now looking into AutoML for the character creation, because Lili gave us a class about it, and there we learned that you can have the code recognize hand movements (instead of single static poses), which will be more precise and very useful for us.
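To illustrate the difference between recognizing a static pose and recognizing a movement (this is only a toy sketch of the idea, not what AutoML would generate; the gestures, numbers, and labels are all made up): a movement classifier looks at a short window of hand positions over time instead of one frame.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def to_feature(window):
    """Flatten a (frames, 3) array of hand positions into one feature vector,
    subtracting the start position so only the relative motion matters."""
    window = np.asarray(window, dtype=float)
    return (window - window[0]).ravel()

# Made-up training examples: 5 frames of (x, y, z) wrist positions per movement.
swipe_right = [[0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0], [0.4, 0, 0]]
raise_up    = [[0, 0, 0], [0, 0.1, 0], [0, 0.2, 0], [0, 0.3, 0], [0, 0.4, 0]]
X = [to_feature(swipe_right), to_feature(raise_up)]
y = ["swipe_right", "raise_up"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# A new, slightly noisy rightward movement still maps to the right label.
test = [[0, 0, 0], [0.09, 0.01, 0], [0.21, 0, 0], [0.28, 0.02, 0], [0.41, 0, 0]]
print(clf.predict([to_feature(test)]))  # -> ['swipe_right']
```

Because the whole trajectory has to match, a hand that merely passes through a pose on its way somewhere else is much less likely to trigger an action.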