Why You Still Need To Know Things

It’s hard to learn things, and it’s even harder to remember them after you learn them if you’re not using them constantly. Humanity has had this problem since the beginning of time. We’ve made a few strides toward improving this state of affairs. We invented mnemonics, which made it easier to create and hold onto those memories; we invented writing, which let us get the information out of our head, share it with others, and retrieve it again when we forgot it; and then we invented the Internet, which let us dispense with the need to know things altogether because we can look them up in seconds anytime. …Right?

That seems to be the way all our educational materials are going. I took several physics courses in high school and college, and in none of them did we need to memorize any of the formulas of Newtonian physics; anytime we had an exam, we got a card with all the formulas on it and we only had to figure out how to apply them to the problem. Now, I’m not saying that’s easy at all, especially if you haven’t done much physics before (there’s a reason why so many books on “solving word problems” have been written for desperate algebra students, and physics word problems get considerably more complicated than the first-year algebra ones). The critical analysis and problem-solving skills that physics promotes are valuable and widely applicable to other disciplines, and I can see why educators prefer students to focus on those rather than worrying about the formulas.

But there’s a way in which those pesky formulas to be memorized are Newtonian physics. If I’m bored and looking at the world around me and have a question about it, those formulas are what I need to answer the question. If I’m on a road trip and I want to dash off a quick estimate of how many minutes it’s going to be until the next rest area without fiddling with my phone while I’m driving, I need to know that distance equals rate times time; my analysis skills aren’t going to help much. If I know the formula, I have a crystal-clear understanding of the relationship between those three things and answering the question is a cinch. Formulas also tell me what pitches I’m going to get when I touch different places on my violin strings and why they have different tone qualities, just how much more wear a truck puts on the highway than a motorcycle, and whether I can plug my hair dryer into my kitchen outlet while the computer is running without blowing out a fuse (yes, my apartment still uses actual thermal fuses). If you took physics but you don’t know any of these formulas even a little bit, I would argue that you don’t understand the most fundamental part of physics.
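The rest-area estimate really is a one-line calculation once you know the formula; here’s a sketch with made-up numbers for the distance and speed:

```python
# distance = rate * time, so time = distance / rate.
# Both numbers below are invented for illustration.
distance_miles = 30   # highway sign: rest area 30 miles ahead
speed_mph = 60        # current cruising speed

time_hours = distance_miles / speed_mph
print(time_hours * 60)   # 30.0 minutes until the rest area
```

Trivial, yes, but only if the relationship between the three quantities is already in your head when the question occurs to you.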

Okay, so maybe I’m just being a grumpy old twenty-something; we used to have to memorize ten new formulas every day on the way to school uphill both ways, and how dare these young folks have it any easier. Can’t we just look them up?

Let’s consider two problems with the “just look it up” school of thought. One of the problems, the one that’s usually brought up in discussions about memorization, is that it’s more convenient to know things. The driving and electricity examples I gave above are examples of the value of convenience. As long as I know that there is an equation that relates distance to time to speed or wattage to the current capacity of a circuit, I could go look it up when I want to run the calculations. I’m much less likely to actually do it, but if I don’t need or want to do that often, maybe it’s a worthwhile tradeoff for saving myself some trouble memorizing. If I use those formulas all the time, then I would probably do well to learn them by heart, just as most students still memorize their multiplication tables because it’s a real bother to do any substantial math without them, even in an age of calculators.

But there’s a much bigger problem: it’s impossible to do creative things with information we don’t know. This is one of the few tasks that people still (generally) outperform computers at: recognizing patterns and making creative connections between things. When the conditions are right, we can almost effortlessly produce new ideas and realizations that a preprogrammed computer using our present technology would never find, at least not without a sea of false positives along the way. But we can’t possibly do this without input; as the saying goes, garbage in, garbage out, or perhaps more aptly here, /dev/null in, immediate end-of-file out. The Internet presents only a possibility of input, not currently accessible input. If we want to be able to use some information for our creative process, we have to have it currently accessible, and that means having it memorized. Merely knowing that some vaguely defined information that will help us exists somewhere out there in the aether is worthless, because if we don’t know what, exactly, that information is, we aren’t going to be able to recognize that it’s relevant in a new context.

I think we all know this, too. Is it really A-OK that almost 40% of Americans can’t name a single one of their First Amendment rights? How are we supposed to value our most basic freedoms if we can’t even say what they are? Will we be able to recognize that someone’s rights are being violated if we don’t know what their rights are? Being able to look up the text of the First Amendment isn’t good enough – unless you’ve internalized these principles, you’re not going to act differently based on them. (Of course, having the text of the First Amendment memorized verbatim isn’t good enough either if you can’t explain what each part means. But that’s a different matter, even though it’s often confused – rote knowledge that you don’t understand is indeed pointless, but dismissing rote knowledge per se just because undeveloped rote knowledge is pointless leads to devaluing real, properly integrated memorized information as well.)

Here’s a problem I encountered at work the other day. I was trying to get some legacy software to work on a new computer (to give you an idea of how “legacy”, it was old enough that it might well have needed an update to fix the Y2K bug). I ran a command that looked something like this:

myprogram c:\myfolder\myfile.bat

(Which means, roughly: ask myprogram to follow the instructions in myfile.bat.) I got back an error:

‘ÿþc’ not recognized as an internal or external command, operable program or
batch file.

What the heck? I didn’t type anything like that, nor were those characters anywhere in myfile.bat!

I resolved this problem in under 30 minutes. Now, I did use the Internet, as does every IT professional when working on just about every problem, but it was only a small part of the process; I certainly didn’t find a ready-made solution anywhere. Here’s a brief tour through my thought process as I investigated the problem:

  1. Since computers work only in ones and zeroes, text is actually represented by a series of numbers. The mapping from letters (and other characters) to numbers is called a text encoding. Here, myprogram was so old that it supported only the ASCII encoding, which has 128 different characters, and Latin-1, which has 256. Naturally, 256 characters isn’t enough to represent every character used in every language in the world, so a variety of different schemes have been invented to include more characters in files, some much better than others.

  2. I had used PowerShell to create myfile.bat, and PowerShell defaults to creating files in the UTF-16 encoding, which is not compatible with ASCII or Latin-1.

  3. UTF-16 files begin with something called a byte-order mark, which in little-endian files is the two bytes 0xFF 0xFE. Crucially, when those bytes are interpreted as Latin-1, they show up on the screen as ÿþ. (Interpreting UTF-16 text as Latin-1 text is a meaningless operation that never yields useful results; it makes about as much sense as reinterpreting the letters that make up Roman numerals as English words.)

  4. Then the next character was a c, which is written 0x0063 in UTF-16…

  5. …but Latin-1 uses one byte (two hexadecimal digits) per character rather than two bytes, and little-endian UTF-16 stores the low-order byte first, so the next bytes read were 63 (a lowercase c) and 00.

  6. A byte containing only zeroes, called a null byte, is often used in computing to indicate the end of the text. So this old application thought that was the end of the file of commands (before it even got to the second bona fide letter of the first command), tried to find a program called ÿþc to run, and freaked out because there (of course) wasn’t any program called that on my computer.
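The whole chain of misreadings above can be reproduced in a few lines of Python (the file contents here are invented; they just need to start with a c, as mine did):

```python
import codecs

# Reconstruct the failure: PowerShell wrote the batch file as UTF-16LE
# with a byte-order mark, but the legacy program read it as Latin-1
# (one byte per character). The contents are made up for illustration.
contents = "cd c:\\myfolder"
raw = codecs.BOM_UTF16_LE + contents.encode("utf-16-le")
print(raw[:6].hex(" "))          # ff fe 63 00 64 00

as_latin1 = raw.decode("latin-1")
# The old program reads up to the first null byte and treats that as
# the command name: the two BOM bytes (ÿ and þ) plus the letter c.
command_name = as_latin1.split("\x00", 1)[0]
print(command_name)              # ÿþc
```

Sure enough, the “program” it goes looking for is ÿþc.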

Solution? Tell PowerShell to write the file using ASCII rather than UTF-16 (a two-minute fix).
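For the record, here’s a sketch of why that fix works, again in Python rather than PowerShell (where, if I recall correctly, it amounts to passing something like `-Encoding ascii` to `Out-File`): an ASCII file has no byte-order mark and one byte per character, so a Latin-1-only program reads it correctly.

```python
# Same invented file contents as before.
contents = "cd c:\\myfolder"

ascii_bytes = contents.encode("ascii")
utf16_bytes = contents.encode("utf-16")   # BOM plus two bytes per character

print(ascii_bytes[:2])                    # b'cd' — starts with the real command
print(len(ascii_bytes), "vs", len(utf16_bytes))
```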

Back to the point. You absolutely do not normally need to know anything at all about text encoding to do my job, but being a nerd, I have needed to learn the basics for other projects I’ve undertaken in college and outside of work. Those basics were enough to make me realize I should determine what ÿþ was in hexadecimal (namely, 0xFFFE) and Google that. Once I figured out from that search that it was a byte-order mark, I was able to check the file for that character and realize the file was in the wrong encoding, and the rest followed from there.

But since you don’t normally need to know anything about text encoding to do my job, I’m sure most managers wouldn’t think it was worth learning about in depth; after all, I could just look it up if it turned out I needed it. But how would I even have realized this issue had to do with text encoding if I hadn’t been aware that different encodings exist, that old applications often use different, incompatible encodings, and that it’s possible to look at the raw hexadecimal digits to figure out what’s going on? I would have been working on this issue for hours before I had the slightest clue what was happening. You can learn everything about text encoding that I needed to know to solve this problem in one hour, and I think it’s something every programmer should learn because some related problem – like this one! – is probably going to come bite them in the rear someday. When you know the information before you need it, that gives you the ability to recognize that you need it, which is frequently most of the battle in problem-solving. If you always knew what information you needed to solve a problem, it would be fine to wait and look it up at that time; but in practice, if you know precisely what information you need to solve a problem, you already have the solution! Unfortunately, the productivity gains from knowing things ahead of time are nearly impossible to quantify, so they are easily left behind in today’s data-driven world.

When you want to solve novel problems that nobody has posted the solutions to on a website you can search with Google, having access to thousands of exabytes of information across the Web with just a few keystrokes is invaluable. But there’s no way you can put it all together without some knowledge of your own. That includes skills and processes, sometimes under the moniker of “critical thinking skills”, which as far as I can tell nobody disputes the importance of yet. But it also includes declarative knowledge (a.k.a. “facts”, or even to a certain extent “rote learning”): there are different text encodings, using the wrong encoding results in weird characters, null bytes are often used to terminate strings, the First Amendment guarantees freedom of religion to all Americans, distance equals rate times time. As painful as it sometimes is to gain and maintain, you are useless without declarative knowledge, and you become more competent the more of it you have. Don’t be afraid to seek it or to insist that others do.

In the months to come, we’ll be getting into many strategies for learning, maintaining, and using declarative knowledge.