--help, -h or -? will produce a help screen which shows the allowed/required
arguments for this module. This information is also included
in the HTML documentation pages for the modules. The arguments can be classified as:
- short options: '-' followed by a letter (e.g. -a -b); multiple short
  options can be merged into one argument (-ab)
- long options: '--' followed by a word (e.g. --help)
- options which take an argument (e.g. -s 4 or -w liste1.txt)
- on/off switch pairs of the form --bar/--nobar
  (e.g. --nodebug/--debug, --noquiet/--quiet, --nolog/--log)
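A quick illustration of the short-option merging described above; the standard `ls` command is used here as a stand-in, since it follows the same convention:

```shell
# Create a small test directory so the listing is reproducible.
mkdir -p /tmp/eep_demo/sub
touch /tmp/eep_demo/sub/a.cnt /tmp/eep_demo/sub/a.trg

# Short options given separately ...
ls -l -a /tmp/eep_demo/sub > /tmp/separate.txt
# ... or merged into one argument; both forms are equivalent.
ls -la /tmp/eep_demo/sub > /tmp/merged.txt

diff /tmp/separate.txt /tmp/merged.txt && echo "identical"
```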
 ----------------------------------------------------------------
|                                                                |
|                         1. Acquisition                         |
|        (device specific, EEP has nothing to do with it)        |
|                                                                |
|----------------------------------------------------------------|
|                                                                |
|                          2. Archiving                          |
|             (cleanup/convert/backup recorded data)             |
|                                                                |
|----------------------------------------------------------------|
|                               |                                |
|  3.1 Trial Classification     |  3.2 Signal "Improvement"      |
|  (mark incorrectly answered   |  (remove unwanted components   |
|  trials, mark trials with     |  from the signal)              |
|  disturbed signal...)         |                                |
|                               |                                |
|----------------------------------------------------------------|
|                                                                |
|                  4. Single Subject Averaging                   |
| (calc the ERP's from the "good" trials using the "good" signal)|
|                                                                |
|----------------------------------------------------------------|
|                                                                |
|                       5. ERP Processing                        |
|          (produce your plots, statistical models etc.          |
|     - or decide to go back to step 1. or 3. and do again...)   |
|                                                                |
 ----------------------------------------------------------------

Performing these steps leaves you with a couple of files in the so-called "EEP project tree". The user is responsible for keeping the project tree in a consistent state; EEP only supports this with its naming and directory conventions. The general hints below might also be helpful.
A typical EEP project tree could look like this:
think/cfg/average.cfg        - EEP configuration files
think/cfg/detrend.cfg
...
think/sh/eval.sh             - scripts, tools ...
think/sh/do_what_i_want.sh
think/sh/trgpatch.awk
...
think/vp01/vp01.cnt          - data record of one subject
think/vp01/vp01.trg            (signal data, classification data...)
think/vp01/vp01.rej
think/vp01/vp01.res
...
think/vp01/avr1/vp01cr.avr   - several ERP-datasets
think/vp01/avr1/vp01cf.avr     (each avr-file contains the ERP of
think/vp01/avr2/vp01cr.avr      one subject in one condition)
think/vp01/avr2/vp01cf.avr
----- ---- --
  |     |   |
  |     |   ------- condition shortcut (2 characters recommended)
  |     ----------- subject code (4 characters recommended)
  ----------------- project codename
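The skeleton of such a tree can be set up with plain shell commands; the directory names below are taken from the example above and should be adapted to your own project codename and subject codes:

```shell
# Create the directory skeleton of the example project tree:
# one project "think", one subject "vp01", two averaging variants.
mkdir -p think/cfg think/sh think/vp01/avr1 think/vp01/avr2
ls -R think
```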
back to scheme
2. Archiving
After acquisition you have to create "clean signal archives" from recorded data.
Typically, you will use the conversion and
management modules here. The goal is to
produce well-aligned datasets for all subjects in the study (equal channel
number/labels, equal reference etc.).
Some of the required tools have been ported to MS-DOS (description in German) to allow a local format conversion on MS-DOS-driven acquisition machines.
back to scheme
3.1 Trial Classification
Before calculating an ERP you have to decide which set of
trials should be averaged to form this ERP.
This requires a response-dependent trial selection (trged
or your own trigger file manipulation script)
to mark incorrectly answered trials, and a signal-dependent trial selection
to mark the disturbed trials (automatically via cntreject or
cntreject_t, or manually via xcnt or xeog).
None of the modules mentioned above changes the signal data. All they do is produce one or more classification lists (.trg, .rej, .cls) which are used in the averaging step to select the good trials.
Note that the trial classification interacts with the signal processing steps. A good filter can repair disturbed trials, or the classification results may show that the signal must somehow be filtered...
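The exact layout of a .trg file is not described on this page; the sketch below assumes, purely for illustration, a simple whitespace-separated list with the trigger code in the second column. Under that assumption, a small awk one-liner in the spirit of the trgpatch.awk script from the example tree could rewrite the codes of incorrectly answered trials:

```shell
# ASSUMPTION: a whitespace-separated trigger list with the trigger
# code in column 2 -- illustrative only, not the real .trg layout.
printf '0.100 11\n0.900 12\n1.700 11\n' > demo.trg

# Rewrite code 12 (here standing for "wrong answer") to 112 so that
# the averaging step can exclude these trials via trigger selection.
awk '$2 == 12 { $2 = 112 } { print }' demo.trg > demo_patched.trg
cat demo_patched.trg   # the second trigger now carries code 112
```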
back to scheme
3.2 Signal "Improvement"
The original signal record is often too noisy or otherwise disturbed.
EEP 3.0 offers filter programs which allow you to compensate for or remove
such artifacts (cntfilter, cntdetrend).
back to scheme
4. Single Subject Averaging
This step (cntaverage) uses the signal and all the classification lists
to calculate the ERPs by averaging the signal of all good trials.
You can treat this step as an important fixed point in each evaluation.
All preceding steps can be seen as preparations to
provide cntaverage with the information it needs to perform its
job successfully.
back to scheme
5. ERP processing
Each ERP (stored as .avr file) is basically a simple channel * time
data matrix for one subject in one condition.
EEP offers several modules to combine/process/view/plot
the individual ERP matrices and the study they form together,
but it doesn't claim to cover all possible evaluations one can think of.
There is always the option, and often the need, to
convert the .avr files
and to proceed with your preferred computation tools.
back to scheme
General Hints
Time axis manipulations in the cnt-files
(cntcat, cntepoch, cntdown)
should be performed first. The external control data files (.trg, .rej, .cls)
become invalid during such processing steps, and you would have to recreate
all of them at great expense.
Each EEG/MEG record normally consists of at least 3 files: signal data (.cnt), triggers (.trg) and rejections (.rej). These three files should have the same basename (the part before the dot) after the first "standard preprocessing". This way, the EEP modules can automatically find the correct triggers and rejections from the cnt filename, and interactive work at the command line becomes much easier.
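Your own scripts can exploit the same convention with plain shell parameter expansion (the filenames are just the examples from the tree above):

```shell
# Derive trigger and rejection filenames from the cnt filename,
# following the shared-basename convention described above.
cnt=vp01/vp01.cnt
base=${cnt%.cnt}          # strip the extension -> vp01/vp01
trg=$base.trg
rej=$base.rej
echo "$trg $rej"          # -> vp01/vp01.trg vp01/vp01.rej
```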
In shell scripts for automated evaluation, supply all filenames on the command line. Do not rely on the automatic filename rules in this case.
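A loop over all subjects might look like the sketch below. `echo` stands in for the actual module call; how a concrete module expects its filename arguments must be taken from that module's --help output:

```shell
# Pass every filename explicitly instead of relying on the
# automatic basename rules; `echo` is a placeholder for the
# real module invocation.
for subj in vp01 vp02 vp03; do
    echo "processing $subj/$subj.cnt $subj/$subj.trg $subj/$subj.rej"
done > joblist.txt
cat joblist.txt
```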
Also consider tools such as "make", which let you store and access frequently used shell commands in a convenient way. See the example Makefile for more details.
I suggest removing all write permissions from data files which cannot be reproduced by automatic scripts. This prevents you from accidentally overwriting valuable files. Note that most Unix programs, including the EEP modules, do not ask before overwriting existing files.
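A minimal sketch of this precaution, using an empty file as a stand-in for a real recording:

```shell
# Protect an irreproducible data file against accidental overwriting.
touch vp01.cnt            # stand-in for a real recording
chmod a-w vp01.cnt        # drop all write permission bits
ls -l vp01.cnt
```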
Although many of the EEP 3.x files are compatible with EEP 2.0, you should be careful when mixing the DOS and Unix modules. Make sure that each textfile has CR-LF newline sequences before using it with DOS, make sure that files created in Unix do not violate the DOS filename limitations, and make sure that the file permissions/ownership allow files created in Unix to be accessed from DOS and vice versa.
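For the newline issue, here is a portable sketch of a Unix-to-DOS conversion with awk (dedicated tools like unix2dos do the same where available):

```shell
# A Unix textfile with LF line endings ...
printf 'first line\nsecond line\n' > unix.txt

# ... converted to DOS CR-LF line endings with awk.
awk '{ printf "%s\r\n", $0 }' unix.txt > dos.txt

# The DOS version is one byte (the CR) longer per line.
wc -c unix.txt dos.txt
```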
EEP modules write a protocol of the current run to the standard output channel. You should save this output in logfiles.
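Capturing the protocol is plain shell redirection; `echo` again stands in for a real module call:

```shell
# Save the run protocol (stdout and stderr) in a logfile;
# replace `echo` with the actual module invocation.
echo "protocol of the current run" > vp01_average.log 2>&1

# To watch the protocol on screen while logging it, pipe the
# module output through `tee vp01_average.log` instead.
cat vp01_average.log
```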
At any time, the central project directory (normally on a server) should contain a clean, current project state. You should NOT store any intermediate results there, but you should bring all manual work (configuration files, scripts, manual trial classifications) from local drives back to the server.
Before you start evaluation processes, check whether you can reduce network load (and gain faster execution for yourself and for others) with a working copy of the input data in a local filesystem of the evaluation machine. This can speed up your work dramatically, especially with the interactive viewers and when you are performing many operations on the same dataset.
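Such a working-copy round trip can be sketched as below; the "server" and "local" directories are stand-ins for the real server tree and local filesystem, and only the manual results travel back:

```shell
# Stand-in directories: "server" plays the central project tree,
# "local" the fast local filesystem (hypothetical paths).
mkdir -p server/think/vp01 local/think/vp01
printf 'signal data' > server/think/vp01/vp01.cnt

# Fetch a working copy of the input data to the local disk ...
cp server/think/vp01/vp01.cnt local/think/vp01/

# ... work locally (viewers, filters, repeated runs), then bring
# only the manual results back to the server:
printf 'manual rejections' > local/think/vp01/vp01.rej
cp local/think/vp01/vp01.rej server/think/vp01/
```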