August 2015
STARTUP STORY

How We Created A Framework For Play

written by
Ken Madsen
Co-founder and CEO @ DXTR Labs

Technology has to positively impact the future of our children

When you design the aeroplane you are also designing the plane crash.

Play has always been nature’s way of teaching and technology has always paved new ways for play to evolve. As a designer and CEO of a company that will put a new device in the hands of thousands of children, I have to always keep in mind that when you design the aeroplane you are also designing the plane crash. It’s not a pleasant thought, but that’s how important product design is to us.

When we started to let our youngest children play on smartphones and tablets, we may have been oblivious to the strong negative effects this can have on children’s cognitive development. I don’t believe these effects were considered when the first smartphones were designed, but nonetheless this is a very real issue we have to take seriously today.

IoT

Technology is a natural evolution of play and we believe that smart play is the next step in this evolution. There is no doubt that our children are in for amazing (The Internet of) Things during the coming years!

Because every child deserves the best start in life, we are working hard to make playDXTR the most useful platform for future play and learning - not only as a healthy pure-play Smart Toy but also as a source of unparalleled insights into the development of our children, for you as a parent. This is the story of how playDXTR works and how we came up with the idea.

Think of baseball

Before we get going, imagine for a second - a baseball pitcher about to throw.

In his hands, he holds a perfectly engineered ball designed for the game many years ago: a hard core, various layers of yarn, covered in two strips of white cowhide tightly stitched together. The ball is ready to be thrown and that is just what is going to happen.

Baseball pitcher throwing a ball. CC0 – Public Domain

The Human Limitation

Imagine the catcher signalling instructions about the throw to the pitcher. He begins his throw. Everything is in slow-motion. As he moves to position and begins to accelerate the ball, we start to observe the motion and forces impacting the ball. We can observe the growing angular velocity, the acceleration the ball receives, the exit velocity of the ball when it leaves the hand, and the motion as it flies.

The coach sees the ball for half a second - the average time a ball flies before it’s hit. In his career he’s seen thousands upon thousands of throws and has developed a second nature for recognising a good ball from a bad ball, relying on his gut feeling, the trajectory and path of the ball, and the sound of the hit if there is one. Because the human eye is only capable of perceiving, on average, 45-60 frames per second (trained fighter pilots have been observed to recognise a plane from a single frame at 220 fps, but that’s extreme), the coach can only catch 25-30 frames of the ball flying. Most likely, he will only perceive parts of the flight.

This one throw is a system with an observer (the coach), a provider of instructions (the catcher), a user (the pitcher), and a device (the ball). These are the stakeholders in this single interaction of throwing the ball, and the goal is to throw the perfect ball the play demands. As such it’s in everyone's interest to know as much about that throw as possible. The coach wants to know how and what to train, the pitcher wants to throw the best ball possible, the catcher wants the instructions to be followed precisely, and the ball… Well, the ball wants to fly.

But in the end the coach, the pitcher, and the catcher are each left with little information about that individual throw. Unbelievably rich details are lost to human perception - yet it’s the details, the inches, that make the whole difference between winning and losing.

I want this level of detail about my child's development!

Inch by inch, play by play

Today the use of technological aids in top sports is a given - slow-motion cameras and computer systems track every single ball thrown, extrapolate its trajectory, and calculate the probability of a hit on the fly. This goes for basketball, baseball, football, any kind of sport you can imagine; the attention to perfecting each detail is second to none. There’s even a popular TV show digging into the subject. As a result, coaches make better-informed judgements and pitchers know exactly which part of a throw to practice.

My boy William playing a puzzle game on an iPad.

I want this level of detail about my child's development! Not because I want to pace him, not because I want to control everything, but because I want to make the absolute best and most informed decisions about my boy that I possibly can; and technology is allowing this today! Toys are the perfect vessel for play and learning, just like a baseball is for baseball. Today's toys and apps can’t give me that.

This is the story of how it works and how we came up with our framework for play.

The Framework

We have spent a lot of time working on the connection between designing a tangible object, high-resolution motion sensors, agile Bluetooth Low Energy nodes, powerful battery packs, microscopic MCUs, and the amazing smart devices we surround ourselves with today.

The result is playDXTR.

playDXTR is what we call a Tactile User Interface. But what does it interface WITH? We created a framework for real life play. Technically the hardware will be open for integration with any digital framework you like. Child’s play is our first implementation.

playDXTR consists of intelligent and magnetic building blocks, called Kubits. Each Kubit is embedded with sensors that can communicate wirelessly with the other Kubits on a network. So, the Kubits let us know how they have moved and are moving both individually and in relation to each other.

We can record the motion of each Kubit, allowing us to keep a history of the movements and to predict which may come next. This is the study of kinematics, dynamics, and statics, which together with energy make up Mechanics.
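To make that concrete, here is a minimal sketch of what recording such a motion history could look like. The MotionSample record, its field names, and the buffer size are hypothetical and purely for illustration - this is not the actual playDXTR data format.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class MotionSample:
    """One hypothetical IMU reading from a single Kubit."""
    t: float                           # timestamp in seconds
    accel: tuple[float, float, float]  # linear acceleration (m/s^2)
    gyro: tuple[float, float, float]   # angular velocity (rad/s)

class MotionHistory:
    """Keeps the most recent samples for one Kubit."""
    def __init__(self, max_samples: int = 500):
        self.samples: deque[MotionSample] = deque(maxlen=max_samples)

    def add(self, sample: MotionSample) -> None:
        self.samples.append(sample)

    def mean_accel_magnitude(self) -> float:
        """A crude 'how vigorously is this block being moved' measure."""
        if not self.samples:
            return 0.0
        mags = [sum(a * a for a in s.accel) ** 0.5 for s in self.samples]
        return sum(mags) / len(mags)
```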

But that’s only one side of the story. The other side of the story is WHY it’s moving.

The combination of what and why is the foundation of our framework for real and digital play

Combining Why & What

Why? Because children need to play; they’re born to play. They need to experience the world and manipulate objects and learn to figure things out on their own. Your job as a parent is to make the best possible decisions for your children - to keep them safe and ensure their optimal development. That is why we’ve created playDXTR.

Today we use multiple high-tech aids to help keep our children safe - baby monitors, GPS trackers, training wheels, and so on. What is missing from this equation is a device that helps us monitor the development of our children while it happens.

The combination of what and why is the foundation of our framework for real and digital play. Using the Kubits, our digital framework can monitor problem solving, fine motor skills, and a whole range of other metrics of the child’s development, all while the child plays with playDXTR. We can estimate these skills by measuring and assessing interactions in three-dimensional space - unhindered by a two-dimensional screen - in relation to any given task or challenge presented on the screen, all within a natural gaming environment.

playDXTR is a unique Tangible User Interface with almost limitless possible applications; problem solving and fine motor skills are just two of the metrics we can measure.
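As a purely illustrative sketch of what such metrics could look like (the event structure, proxies, and field names below are invented for this example and are not our actual scoring model), a few simple proxies can already be derived from a stream of timestamped play events:

```python
from dataclasses import dataclass

@dataclass
class PlacementEvent:
    """One hypothetical 'child attached a Kubit' event during a challenge."""
    t: float          # seconds since the challenge was presented
    correct: bool     # did the placement match the on-screen challenge?
    adjustments: int  # how many small re-grips/rotations preceded it

def problem_solving_proxy(events: list[PlacementEvent]) -> float:
    """Share of placements that were correct on the first try."""
    if not events:
        return 0.0
    first_try = sum(1 for e in events if e.correct and e.adjustments == 0)
    return first_try / len(events)

def fine_motor_proxy(events: list[PlacementEvent]) -> float:
    """Average number of micro-adjustments per placement (lower is steadier)."""
    if not events:
        return 0.0
    return sum(e.adjustments for e in events) / len(events)
```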

Dynamics 101 – Let’s get nitty-gritty

Beware! This will get pretty nerdy. You can skip to the next part if you’d rather not go into equations and maths.

The study and understanding of motion - kinematics, statics, and dynamics - is the basis and foundation of most engineering curricula. When it comes to motion, one man literally wrote the rulebook: Newton.

LAW I A particle remains at rest or continues to move with uniform velocity (in a straight line with a constant speed) if there is no unbalanced force acting on it.

LAW II The acceleration of a particle is proportional to the resultant force acting on it and is in the direction of the force.

LAW III The forces of action and reaction between interacting bodies are equal in magnitude, opposite in direction, and collinear.

Don’t worry about Newton writing about particles - treating a body as a particle is exactly what lets us simplify situations. In a very simplified way, these laws govern the relation between force, mass, and acceleration.

Skipping a few chapters of the textbooks takes us to the general equations of motion, looking at force, mass, and acceleration. Let’s see how this looks in a rigid body in three-dimensional space. We could look at every arbitrary object, but let’s make it a cube since that matches a Kubit.

Here we see a Free Body Diagram illustrating four arbitrary external forces acting on the body, with its mass centre at G. These forces could be a hand moving the cube, gravity, or the cube hitting the floor - we don’t worry about that yet.

According to the force equation:

∑F = mā

We know that the sum of all external forces is equal to the mass m of the body times the acceleration ā of its mass centre G.
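In code, this is nothing more than vector addition and a division by mass. The numbers below are made up - I’m assuming a block of roughly one cubic inch and a few tens of grams just to have something to plug in:

```python
import numpy as np

m = 0.030  # assumed mass of one block in kg (illustrative only)

# Arbitrary external forces acting on the cube, in newtons (x, y, z):
F_hand    = np.array([0.20, 0.05, 0.40])     # a hand pushing the cube
F_gravity = np.array([0.0, 0.0, -m * 9.81])  # the weight of the cube
F_floor   = np.array([0.0, 0.0, 0.0])        # no contact force at this instant

F_sum = F_hand + F_gravity + F_floor  # the sum of all external forces
a_bar = F_sum / m                     # acceleration of the mass centre G

print("acceleration of G (m/s^2):", a_bar)
```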

The moment equation is taken about the mass centre:

∑MG = ḢG

This shows us that the resultant moment of the external forces about the mass centre equals the time rate of change of the body’s angular momentum about the mass centre.

Since a general system of forces acting on a rigid body may be replaced by a resultant force applied at a chosen point (or call it a particle → that’s why we love Newton's laws…) and a corresponding couple, we can replace the external forces by their equivalent force-couple system in which the resultant force acts through the mass centre. In doing so, we can illustrate the corresponding dynamic response of the body.
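For a homogeneous cube, the rotational side is unusually friendly: all three principal moments of inertia about the mass centre are equal (I = m·s²/6 for side length s), so the coupling terms in Euler's equations vanish and the angular acceleration is simply the resultant moment divided by I. A quick sketch, again with made-up numbers:

```python
import numpy as np

m, s = 0.030, 0.0254   # assumed mass (kg) and side length (m), roughly one cubic inch
I = m * s**2 / 6.0     # moment of inertia of a solid cube about any central axis

# Resultant moment about the mass centre G, in newton-metres (made-up values):
M_G = np.array([1.0e-4, 0.0, 2.0e-4])

# Valid because I is the same about all three axes of a homogeneous cube,
# so the gyroscopic cross terms cancel out:
alpha = M_G / I        # angular acceleration (rad/s^2)
print("angular acceleration (rad/s^2):", alpha)
```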

This is only one part of the movement of the cube, and we can tell the 3D acceleration of the mass centre with the help of an accelerometer. We are also interested in how the cube is rotating, i.e., its pitch, roll, and yaw.

Let’s turn down the number of equations, and focus more on the understanding from here.

If the cube rotates, it will do so around its mass centre, with the gyroscope’s measurement axes exiting the faces of the cube orthogonally (depending on how the gyroscope is oriented). When the cube is rotated, we will observe the angular velocity around each axis. The rotation can be illustrated like this:

Combining the accelerometer and gyroscope creates what is known as an Inertial Measurement Unit (IMU), which basically tells us exactly how a cube is moving in space every fraction of a second. This would be a 6-axis IMU, and by adding a magnetometer (a fancy word for a device that measures the strength and direction of the magnetic field at a point in space, aka a compass) we can reduce the amount of drift that the 6-axis sensor experiences. For those keeping count: this would now be a 9-axis IMU.
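To give a flavour of how such sensor fusion can work - this is the textbook complementary filter, a deliberate simplification, and not necessarily the algorithm running inside a Kubit - you blend the gyroscope’s fast but drifting integration with the accelerometer’s noisy but drift-free gravity reference:

```python
import math

def complementary_filter(pitch, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    """One update step of a simple pitch estimate.

    pitch:     previous pitch estimate in radians
    gyro_rate: angular velocity around the pitch axis (rad/s), from the gyroscope
    accel_y/z: accelerometer readings used to derive a gravity-based pitch
    dt:        time since the last sample, in seconds
    alpha:     how much to trust the gyro short-term vs. the accelerometer long-term
    """
    pitch_gyro = pitch + gyro_rate * dt         # fast, but drifts over time
    pitch_accel = math.atan2(accel_y, accel_z)  # drift-free, but noisy during motion
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel

# Example: a few 100 Hz samples while the cube is slowly tipped over
pitch = 0.0
for accel_y, accel_z, gyro_rate in [(0.0, 9.81, 0.1), (0.5, 9.79, 0.1), (1.0, 9.76, 0.1)]:
    pitch = complementary_filter(pitch, gyro_rate, accel_y, accel_z, dt=0.01)
print("estimated pitch (rad):", pitch)
```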

A device like this is what is used in inertial navigation systems for aircraft, watercraft, spacecraft, and even guided missiles. Less threateningly, it is also used in consumer-facing products for gaming, visual effects, motion capture, and gesture recognition. If you have ever played with a PlayStation® SIXAXIS controller, you know you can move the controller and the motion actually translates into an action in the game.

By now the IMU is a tried and tested type of sensor and is incorporated into many devices. Imagine playing Mario where a jolt of the controller could actually make Mario jump higher.

Super Mario jumping over an endless stream of Goombas

We combine the IMU with our own proprietary way of recognising which cubes are connected to which, and how. This creates the basis for the framework through which you will be able to build cars, monsters, animals, letters, buildings, bridges, spacecraft, missiles, and just about everything else you can imagine. Because each cube is connected, you will be able to instantly see your creation digitised on the tablet, in the virtual world.
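How the connection sensing works stays proprietary, but conceptually the result is just a graph: each Kubit is a node and each detected face-to-face attachment is an edge. A hypothetical, heavily simplified representation that an app could mirror on the tablet might look like this:

```python
# Hypothetical, simplified model of an assembly of connected Kubits.
# Face indices 0-5 stand for +X, -X, +Y, -Y, +Z, -Z.

from collections import defaultdict

class Assembly:
    def __init__(self):
        # kubit_id -> {face_index: neighbour_kubit_id}
        self.connections: dict[str, dict[int, str]] = defaultdict(dict)

    def connect(self, a: str, face_a: int, b: str, face_b: int) -> None:
        """Record that face `face_a` of Kubit `a` touches face `face_b` of Kubit `b`."""
        self.connections[a][face_a] = b
        self.connections[b][face_b] = a

    def to_model(self) -> list[tuple[str, int, str]]:
        """Flatten to a list of (kubit, face, neighbour) edges for the tablet to render."""
        return [(k, f, n) for k, faces in self.connections.items() for f, n in faces.items()]

# Build a tiny "car": two body blocks and a wheel block attached underneath.
car = Assembly()
car.connect("body-1", 0, "body-2", 1)   # body-1's +X face meets body-2's -X face
car.connect("body-1", 5, "wheel-1", 4)  # a wheel attached under body-1
print(car.to_model())
```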

Designing for fidgeting - Non-tech guys can join again here

A physical property of an object that most people don’t usually think about is called the affordance. The term has its roots in psychology, going all the way back to the early twentieth century, and was later defined as:

“All action possibilities latent in the environment, objectively measurable and independent of the individual's ability to recognise them, but always in relation to agents and therefore dependent on their capabilities.” - James J. Gibson (1979), The Ecological Approach to Visual Perception.

Essentially, the term relates to how a human can interact with an object, based on the context of the interaction, the attributes of the object, and the experience of the user. This was popularised by Donald Norman, the godfather of Human-Computer Interaction and Interaction Design, in the book “The Design of Everyday Things”.

Obviously, if the chair were there, the relation would be clear: one would probably not sit on the table and work on the chair

A great example our former professor, Henry Larsen, would use in one of his first classes was this: the classroom had tables but no chairs. Gibson’s objective theory would say that we could sit on the tables or the floor, keep standing, or leave the room. While true, Norman looks at the situation with more subtlety: what does the table afford without a chair, when it’s too low to stand at?

Obviously, if the chair were there, the relation would be clear: one would probably not sit on the table and work on the chair. What is the likelihood of us leaving the classroom because there are no chairs, versus just sitting on the table or remaining standing? What is the context of the situation? Would the social situation allow us to sit on the table? Lastly, our personal experiences may influence the situation; one might have had another class with a professor carrying out the same little social experiment to align the thoughts of the students, and would just go ahead and sit on the table like it was nothing.

We can comfortably conclude that the affordance of an object is a very powerful property to consider while designing a product. Affordance is not just about a chair and a table, but about every object we as humans can interact with. As designers, we bring these properties into our design to guide the user experience. For instance, the way our magnets work and align makes the Kubits snap into the correct position. Even without any visual cues to how you can combine them, the magnetic forces will guide you, which means the learning curve is incredibly low and children pick up on the framework almost immediately. Designing and building in this kind of guided use allows children to just create and play instead of first having to learn how to use something.

The size of an object also carries information, perceived and interpreted by the user, about how to engage with it - can I pick it up with two fingers? Does it look heavy? Do I need two hands? Does the material look like something I’d even like to touch? Is it soft and delicate? Is it sharp and dangerous? We make all these interpretations in the blink of an eye, and we make them all the time.

Sometimes it goes wrong - I think we all know the feeling of lifting a milk carton thinking it’s full, only to discover that it’s almost empty and we nearly throw it at the ceiling. My point is that the affordance of an object is something we usually don’t think about as users, which means the designers have done a good job: if you don’t think about it, the object is doing what it’s supposed to and working as intended. Just think of every time you have used something and been terribly annoyed that it didn’t make sense or do what you wanted, and was so overly complicated that it needed a huge User Manual, which only made things worse.

A classic example of bad design: think about old VCRs and how hard it was to change the time setting. A product like that has probably been designed more by engineers than by designers.

The famous quote by Mieke Gerritzen. Image Credit

Let me tell you how we found out what size to make our Kubits. The first question in our design research was to figure out what shape would prompt the most intrinsic interaction from users.

We handcrafted foam shapes of varying sizes, everything from a sphere, a pyramid, and a cube to asymmetrical shapes with any number of faces. We’d put the shapes casually on a table with a chair and invite participants to an interview about something completely unrelated to the shapes. We’d bring the participant into the room, point him to the chair, and say that the interviewer would join him momentarily. The interviewer would then not show up, because the real test was to see which object the participant would pick up first and start to fidget with, and what he would do with it.

Remember that not a single line of code had been written at this stage; we were deep in our design exploration process, and the power of simple prototypes is that you can gain so much insight without making an elaborate product. Talk about a Minimum Viable Product. Eventually, we settled on the cube form factor and went on to explore first the size and then the number of cubes.

To find the size we could read the literature, experiment, or just blindly guess. As engineers, we don’t like guessing. We dug up an amazingly detailed US Army report on the anthropometry of the human hand, covering glove sizes and the placement of buttons and switches in a cockpit. This gave us good grounds to aim for an object the size of one cubic inch, as this is well within normal thumb-and-index-finger grabbing range and approaches a size that requires fine motor skills, which was important to us. Not too small, not too big.

Design Anthropology

The next design question was the numbers. At what number of cubes do people generally start to play in various social settings, and what happens when they do? Here our skills in Design Ethnography came into play.

We set up candid cameras around town and campus and placed various numbers of cubes on tables - sometimes on many tables, sometimes on a few. We collected endless hours of footage that we meticulously and methodically analysed for objective data.

Lo and behold, it turned out that above seventeen cubes people would start engaging with the cubes and begin to create and build structures. Too many cubes, and the interaction disappeared; the Goldilocks zone was between 17 and 35 cubes.

These prototypes were regular wooden blocks with no colour, no instructions, and no influence whatsoever - nothing but the intrinsic motivation to start to fidget and play. We even observed people getting up from their table at a cafe to go over and ask strangers if they could have their cubes!

That’s when we knew we were on to something.

Join me in my next post, where I will move into how digitising real play is paving the way for this deep insight into our children’s development.

Ken Madsen

Kenneth is our CEO and contact point for everything DXTR Tactile. Ken makes sure that in everything we do, we put our passion first: our curiosity for play and learning.

Engineer by training, entrepreneur by heart, father for life.

Secret power: Can stick random objects to his forehead.
