Suppose you have a room full of scientists—hundreds of 'em—and want to find out how they actually use computers in their work. There isn't time to interview them individually, or to record their desktops during a typical working week, so you've decided to ask them to self-assess their understanding of some key terms using the following scale:
No idea what it is.
Use it/have used it infrequently.
Use it regularly.
Couldn't get through the day without it.
My list is below; what have I forgotten, and (more importantly) how would you criticize this assessment method?
A command-line shell
Shell scripts
Version control system (e.g., CVS, Subversion)
Bug tracker
Build system (e.g., Make, Ant)
Debugger (e.g., GDB)
Integrated Development Environment (e.g., Eclipse, Visual Studio)
WYSIWYG document formatting (e.g., Word, PowerPoint, OpenOffice)
Now, you have the same room full of scientists, and you want to find out how much they know about software development. There still isn't time to interview them or have them solve some programming problems, so again you're falling back on self-assessment. This time, the scale is:
No idea what it means.
Have heard the term but couldn't explain it.
Could explain it correctly to a junior colleague.
Expert-level understanding.
and the terms themselves are:
Nested loop
Switch statement
Stable sort
Depth-first traversal
Polymorphism
Singleton
Regular expression
Inner join
Version control
Branch and merge
Unit test
Regression test
Build and smoke test
Code coverage
Breakpoint
Defensive programming
Test-driven development
Release manifest
Agile development
UML
Traceability matrix
User story
Once again, my questions are (a) what have I forgotten, and (b) how "fair" is this as an assessment method?
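For calibration, here's a minimal sketch of the kind of answer I'd count as "could explain it correctly to a junior colleague" for two of the terms above, "stable sort" and "unit test". The language (Python) and the example itself are my choices for illustration; nothing in the survey assumes them:

    # Illustrative only: a unit test demonstrating what "stable sort" means.
    # A sort is stable if records that compare equal keep their original
    # relative order; Python's built-in sorted() is documented to be stable.
    import unittest

    class TestStableSort(unittest.TestCase):
        def test_equal_keys_keep_input_order(self):
            records = [("carol", 2), ("alice", 1), ("bob", 1)]
            by_number = sorted(records, key=lambda record: record[1])
            # "alice" preceded "bob" in the input and both have key 1,
            # so a stable sort must keep "alice" ahead of "bob".
            self.assertEqual(by_number,
                             [("alice", 1), ("bob", 1), ("carol", 2)])

    if __name__ == "__main__":
        unittest.main()

Someone at the "could explain it" level ought to be able to produce and walk through something like this; the open question is whether self-assessment actually predicts that ability.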
Originally posted 2008-07-23 by Greg Wilson in Content, Research.