
Modeling Steps 3 - 4

By Neuromatch Academy

Content creators: Marius ‘t Hart, Megan Peters, Paul Schrater, Gunnar Blohm

Content reviewers: Eric DeWitt, Tara van Viegen, Marius Pachitariu

Production editors: Ella Batty, Spiros Chavlis


Step 3: Determining the basic ingredients

Video 4: Determining basic ingredients


# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math

# @title Data retrieval
# @markdown Run this cell to download the data for this example project.
import io, requests
import numpy as np
from collections import Counter
r = requests.get('https://osf.io/mnqb7/download')
if r.status_code != 200:
  print('Failed to download data')
else:
  # Load the npz archive once, then pull out each array
  data = np.load(io.BytesIO(r.content), allow_pickle=True)
  train_moves = data['train_moves']
  train_labels = data['train_labels']
  test_moves = data['test_moves']
  test_labels = data['test_labels']
  label_names = data['label_names']
  joint_names = data['joint_names']



markdown1 = r'''

## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:

* Vestibular input: *v(t)*

* Binary decision output: *d* - time dependent?

* Decision threshold: θ

* A filter (maybe running average?): *f*

* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫

</font>
'''

markdown2 = '''
## Step 3
<br>
<font size='3pt'>
To address our question we need to design an appropriate computational data analysis pipeline. After some brainstorming, we think we need to extract self-motion judgements from the spike counts of our neurons. Based on those, our algorithm needs to make a decision: was there self-motion or not? This is a classic two-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).

So we determined that we probably needed the following ingredients:

* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''

markdown31 = '''
## Step 3

<br>
<font size='3pt'>
After downloading the data, we should have 6 numpy arrays:

`train_moves`: the training set of 1032 movements

`train_labels`: the class labels for each of the 1032 training movements

`test_moves`: the test set of 172 movements

`test_labels`: the class labels for each of the 172 test movements

`label_names`: text labels for the values in the two arrays of class labels

`joint_names`: the names of the 24 joints used in each movement

<br>
We'll take a closer look at the data below. Note: data is split into training and test sets. If you don't know what that means, NMA-DL will teach you!
</font>
<br>
'''

markdown32 = '''
<br>
<font size='3pt'>

**Inputs:**
For simplicity, we take the first 24 joints of the full MoVi dataset, covering all major limbs. The data is in exponential map format, which gives 3 rotations/angles for each joint (pitch, yaw, roll). The advantage of this representation is that it is (mostly) agnostic about body size and shape. And since we care about movements only, we choose this representation of the data (the full dataset offers others).

Since the joints are simply points, the 3rd angle (i.e., roll) contained no information, and it has already been dropped from the data we pre-formatted for this demo project. That is, the movements of each joint are described by 2 angles that change over time. Furthermore, we normalized all the angles/rotations to fall between 0 and 1 so they are good input for PyTorch.

Finally, the movements originally took varying amounts of time, but we need identically sized inputs for each movement, so we sub-sampled and (linearly) interpolated the data to 75 timepoints.

Our training data should therefore have 1032 movements, 2 × 24 joints = 48 channels, and 75 timepoints. Let's check and make sure:

</font>
<br>
'''

markdown33 = '''
<br>
<font size='3pt'>

**Joints:**
For each movement we have 2 angles from 24 joints. Which joints are these?
</font>
<br>
'''

markdown34 = '''
<br>
<font size='3pt'>

**Labels:**

Let's have a look at the `train_labels` array too:

</font>
<br>
'''


markdown35 = '''
<br>
<font size='3pt'>
The labels are numbers, and there are 1032 of them, so that matches the number of movements in the data set.
There are text versions too in the array called `label_names`. Let's have a look. There are supposed to be 14 movement classes.
</font>
<br>
'''

markdown36 = '''
<br>
<font size='3pt'>
The test data set has similar data, but fewer movements. That's ok.
What's important is that both the training and test datasets have an even spread of movement types,
i.e., we want them to be balanced. Let's see how balanced the data is:

Train data:

</font>
<br>
'''

markdown37 = '''
<br>
<font size='3pt'>
Test data:
</font>
<br>
'''

markdown38 = '''
<br>
<font size='3pt'>
So that looks more or less OK. Movements 2, 3, 4 and 5 each occur one more time in the
training data than the other movements, and one time fewer in the test data.
Not perfectly balanced, but that probably doesn't matter much.
</font>
<br>
'''

markdown39 = '''
<br>
<br>
<font size='3pt'>

**Model ingredients**

"Mechanisms":
<br>

Feature engineering? --> Do we need anything else aside from angular time courses? For now we choose to only use the angular time courses (exponential maps), as our ultimate goal is to see how many joints we need for accurate movement classification so that we can decrease the number of measurements or devices for later work.

Feature selection? --> Which joint movements are most informative? These are related to our research questions and hypotheses, so this project will explicitly investigate which joints are most informative.

Feature grouping? --> Instead of trying all possible combinations of joints (very many) we could focus on limbs, by grouping joints. We could also try the model on individual joints.

Classifier? --> For our classifier we would like to keep it as simple as possible, but we will decide later.

Input? --> The training data (movements and labels) will be used to train the classifier.

Output? --> The test data will be used as input for the trained model and we will see if the predicted labels are the same as the actual labels.

</font>
'''


# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))


out2 = widgets.Output()
with out2:
  display(Markdown(markdown2))


out1 = widgets.Output()
with out1:
  display(Markdown(markdown1))

out3 = widgets.Output()
with out3:
  display(Markdown(markdown31))

  display(Markdown(markdown32))

  print(train_moves.shape)

  display(Markdown(markdown33))

  for joint_no in range(24):
    print(f"{joint_no}: {joint_names[joint_no]}")

  display(Markdown(markdown34))

  print(train_labels)
  print(train_labels.shape)

  display(Markdown(markdown35))
  # let's check the values of the train_labels array:
  label_numbers = np.unique(train_labels)
  print(label_numbers)

  # and use them as indices into the label_names array:
  for label_no in label_numbers:
    print(f"{label_no}: {label_names[label_no]}")

  display(Markdown(markdown36))

  print(Counter(train_labels))

  display(Markdown(markdown37))

  print(Counter(test_labels))

  display(Markdown(markdown38))

  display(Markdown(markdown39))

out = widgets.Tab([out1, out2, out3])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
out.set_title(2, 'Deep Learning')

display(out)
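
A minimal sketch of the pre-processing described in the Deep Learning tab above (sub-sampling/linearly interpolating each movement to 75 timepoints, then normalizing the angles to fall between 0 and 1). This `preprocess_movement` helper is hypothetical, not the code that was actually used to prepare the dataset; it assumes a raw movement array of shape (channels, timepoints).

import numpy as np

def preprocess_movement(raw_move, n_timepoints=75):
  # Resample each channel onto a common time base by linear interpolation,
  # then rescale all angles to the [0, 1] interval.
  n_channels, n_orig = raw_move.shape
  old_t = np.linspace(0.0, 1.0, n_orig)
  new_t = np.linspace(0.0, 1.0, n_timepoints)
  resampled = np.stack([np.interp(new_t, old_t, raw_move[c])
                        for c in range(n_channels)])
  lo, hi = resampled.min(), resampled.max()
  return (resampled - lo) / (hi - lo)

# Example on a fake movement: 48 channels, 90 original timepoints
fake_move = np.random.randn(48, 90)
print(preprocess_movement(fake_move).shape)  # -> (48, 75)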

Determine your basic ingredients

This will allow you to think deeper about what your modeling project will need. It’s a crucial step before you can formulate hypotheses because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:


  1. What parameters / hyperparameters / variables are needed?

    • Constants?

    • Do they change over space, time, conditions…?

    • What details can be omitted?

    • Constraints, initial conditions?

    • Model inputs / outputs?

  2. What variables are needed to describe the process to be modelled?

    • Brainstorming!

    • What can be observed / measured? latent variables?

    • Where do these variables come from?

    • Do any abstract concepts need to be instantiated as variables?

      • e.g., value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics

      • Instantiate them so that they relate to potential measurements!

This is a step where your prior knowledge and intuition are tested. You want to end up with an inventory of specific concepts and/or interactions that need to be instantiated.

Make sure to avoid the pitfalls!

Click here for a recap on pitfalls
  1. I’m experienced, I don’t need to think about ingredients anymore

  • Or so you think…

  2. I can’t think of any ingredients

  • Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure?

  3. I have all inputs and outputs

  • Good! But what will link them? Thinking about that will start shaping your model and hypotheses

  4. I can’t think of any links (= mechanisms)

  • You will acquire a library of potential mechanisms as you keep modeling and learning

  • But the literature will often give you hints through hypotheses

  • If you still can’t think of links, then maybe you’re missing ingredients?


Step 4: Formulating specific, mathematically defined hypotheses

Video 5: Formulating a hypothesis

# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown, Math

# Not writing in latex because that didn't render in jupyterbook

markdown1 = r'''

## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.

Mathematically, this would write as

<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>

where *S* is the illusion strength, *N* is the noise level, and *k* is a free parameter.
> we could simply use the frequency of occurrence across repetitions as the "strength of the illusion"

We would get the noise as the standard deviation of *v(t)* (assuming *v(t)* has zero mean), i.e.

<div align="center">
<em>N</em> = (<b>E</b>[<em>v(t)</em><sup>2</sup>])<sup>1/2</sup>,
</div>

where **E** stands for the expected value.

Do we need to take the average across time points?
> It doesn't really matter, because we have the generative process, so we can just use the σ that we define
</font>
'''

markdown2 = '''
## Step 4
<br>

<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self-motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self-motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better; this would be the case in the faster acceleration condition. We want to test this too.

We came up with the following hypotheses, each focusing on a specific detail of our overall research question:

* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.

> There are many other hypotheses you could come up with, but for simplicity, let's go with those.

Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)

Where **E** denotes the expected value (in this case, the mean) of its argument: the classification outcome in a given trial type.
</font>
'''

markdown3 = '''
## Step 4
<br>

<font size='3pt'>
Since humans can easily distinguish different movement types from video data and also more abstract "stick figures", a DL model should also be able to do so. Therefore, our hypotheses are more detailed with respect to parameters influencing model performance (and not just whether it will work or not).

Remember, we're interested in seeing how many joints are needed for classification. So we could hypothesize (Hypothesis 1) that arm and leg motions are sufficient for classification (meaning: head and torso data are not needed).

* Hypothesis 1: The performance of a model with four limbs plus torso and head is not higher than the performance of a model with only limbs.

We could also hypothesize that data from only one side of the body is sufficient (Hypothesis 2), e.g., the right side, since our participants are right-handed.

* Hypothesis 2: A model using only joints in the right arm will outperform a model using only the joints in the left arm.

Writing those in mathematical terms:
* Hyp 1: **E**(perf<sub>limbs+torso+head</sub>) ≤ **E**(perf<sub>limbs</sub>)
* Hyp 2: **E**(perf<sub>right arm</sub>) > **E**(perf<sub>left arm</sub>)
</font>
'''

# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))

out2 = widgets.Output()
with out2:
  display(Markdown(markdown2))

out1 = widgets.Output()
with out1:
  display(Markdown(markdown1))

out3 = widgets.Output()
with out3:
  display(Markdown(markdown3))

out = widgets.Tab([out1, out2, out3])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
out.set_title(2, 'Deep Learning')

display(out)
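
The computational-model hypothesis *S* = *k* ⋅ *N* is a one-parameter linear model, so once the simulation produces illusion strengths at several noise levels, estimating *k* takes only a few lines of numpy. Below is a minimal sketch with made-up numbers standing in for simulation output; in the real project, *S* would be the frequency of illusory self-motion across repetitions.

import numpy as np

# Hypothetical noise levels (the sigma we define in the generative process)
# and made-up illusion strengths; real values would come from the model.
noise_levels = np.array([0.5, 1.0, 1.5, 2.0, 2.5])            # N
illusion_strength = np.array([0.11, 0.19, 0.32, 0.41, 0.48])  # S

# If we only had samples v of the zero-mean vestibular signal, the noise
# level would be N = np.sqrt(np.mean(v**2)).

# Least-squares estimate of k for the no-intercept model S = k * N
k_hat = (noise_levels @ illusion_strength) / (noise_levels @ noise_levels)
print(f"fitted k: {k_hat:.3f}")
print("predicted S:", k_hat * noise_levels)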

Formulating your hypothesis

Once you have your question and goal lined up, have done a literature review, and have thought about the ingredients needed for your model, you're ready to start thinking about specific hypotheses.

Formulating hypotheses really consists of two consecutive steps:

  1. You think about the hypotheses in words by relating ingredients identified in Step 3

  • What is the model mechanism expected to do?

  • How are different parameters expected to influence model results?

  2. You then express these hypotheses in mathematical language by giving the ingredients identified in Step 3 specific variable names.

  • Be explicit, e.g., \(y(t)=f(x(t), k)\) but \(z(t)\) doesn’t influence \(y\)

There are also “structural hypotheses”: assumptions about which model components will be crucial for capturing the phenomenon at hand.

Important: Formulating the hypotheses is the last step before starting to model. This step determines the model approach and ingredients. It provides a more detailed description of the question / goal from Step 1. The more precise the hypotheses, the easier the model will be to justify.

Make sure to avoid the pitfalls!

Click here for a recap on pitfalls
  1. I don’t need hypotheses, I will just play around with the model

  • Hypotheses help determine and specify goals. You can (and should) still play…

  2. My hypotheses don’t match my question (or vice versa)

  • This is a normal part of the process!

  • You need to loop back to Step 1 and revisit your question / phenomenon / goals

  3. I can’t write down a math hypothesis

  • Often that means you lack ingredients and/or clarity on the hypothesis

  • OR: you have a “structural” hypothesis, i.e., you expect a certain model component to be crucial in explaining the phenomenon / answering the question