Liberal Arts: The Seed of Apple


In this dark age of deep budget cuts to once-great public universities like the University of Wisconsin, and of politicians who pander to their anti-intellectual base by demeaning liberal arts majors while hyping technology majors (see previous post), it may be refreshing to remind ourselves that Steve Jobs himself once said that what made Apple Computer different from other tech companies was its goal of bringing a “liberal arts perspective” to computing-

I think our major contribution was in bringing a liberal arts point of view to the use of computers. … You know, if you really look at the ease of use of the Macintosh, the driving motivation behind that was … to bring beautiful fonts and typography to people … it was to bring graphics to people, not for plotting laminar-flow calculations, but so that they could see beautiful photographs or pictures or artwork. Our goal was to bring a liberal arts perspective, and a liberal arts audience to what had traditionally been a very geeky technology and a very geeky audience. That’s the seed of Apple.

Here is the audio of this quote, from a 1996 Terry Gross interview-

Another often-quoted statement from Jobs on the same subject, which he gave after introducing the iPad 2 in 2011-

It’s in Apple’s DNA that technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing — and nowhere is that more true than in these post-PC devices.

By the way, since I was acquainted with Jobs when we were both students at Reed College, I’m looking forward – with just a wee bit of trepidation – to the new Sorkin/Boyle movie, “Steve Jobs” (even though it might have more to do with the artists who made it than with Jobs the man – something Steve actually might have approved of…)-

Update: Well, I saw the movie, and I’m sorry to say that I can’t recommend it, at least if you’re interested in learning much about the major events it depicts: the release of the original Mac; the (apparently) intentional failure of NeXT; the release of the iMac following Jobs’ return to Apple, and finally the ambivalent Sculley-Jobs relationship (which, as the film handles it, is simply confusing). Nor can I recommend it if you’re more interested in learning about Jobs’ attitude towards his daughter Lisa: first he disowns her, then [spoiler alert!] he finally tries to make amends – a transformation that might have been worth exploring if Sorkin could attribute it to something deeper than Jobs’ merely growing up. The acting, as you might expect, is all fine (Fassbender really nails Jobs’ persona in the film’s third and otherwise weakest act), and the dialog is certainly pithy enough (Sorkin’s trademark). But the kid I remember from Reed College was far more complex than the character I saw on the screen, and I can’t believe that he lost so much depth and subtlety over time. He certainly might have become as obsessive and inflexible as the film portrays him, but surely he continued to be more than that, at least when he was away from the high-pressure events the film focuses on. To achieve a more satisfying portrait of Jobs the man, a better film would follow him between those events, during many quieter moments, and track his development at a more leisurely pace.

Particle Fever: Supersymmetry versus Multiversal Chaos


You may dimly recall media reports that the so-called “God Particle”, otherwise known as the Higgs boson, had been observed by CERN’s Large Hadron Collider. Probably the clearest, most succinct explanation I’ve seen or heard of its importance to particle physics can be found in the documentary “Particle Fever”. If you are the curious sort (and you must be, since you’re here reading this), I highly recommend it. Here’s the trailer, which you can watch by clicking on “Watch Trailer” below. You can find the whole film on Netflix (if you have a subscription), on iTunes, or rent it on Vimeo by clicking on the bouncing icon.

Feel Like Feeling Small?


Here’s a great excursion into a 4.3 GB image of the Andromeda Galaxy, our nearest major galactic neighbor, taken on January 5, 2015, and brought to you by the NASA/ESA Hubble Space Telescope. Watch it at as high a resolution as possible; at 720p, watching it fullscreen becomes reasonable. I recommend pausing the video playback every once in a while; when you get a stable, in-focus frame, you can better appreciate the density of the star field you’re watching.

If you’re having trouble watching it here, try clicking on the YouTube button and watching it there. (YouTube embeds have been hit-and-miss for some time now. I’ve found Firefox to be slightly more reliable than Safari or – surprisingly – Chrome in that department.)

By the way, Andromeda is heading our way at about 110 kilometers per second. It is expected to merge with our own Milky Way in about four billion years…
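
As a rough sanity check on those figures, here’s a back-of-the-envelope calculation of my own; it ignores the fact that the approach speeds up as the two galaxies fall toward each other, so it overestimates the time-

```python
# Back-of-the-envelope check of the quoted figures. The published merger
# estimate (~4 billion years) is shorter than this constant-speed answer
# because gravity accelerates the approach as the galaxies close in.

KM_PER_LIGHT_YEAR = 9.46e12
distance_km = 2.5e6 * KM_PER_LIGHT_YEAR   # Andromeda is roughly 2.5 million light-years away
speed_km_per_s = 110                      # current approach speed quoted above

seconds = distance_km / speed_km_per_s
years = seconds / 3.15e7                  # seconds in a year

print(f"Constant-speed travel time: about {years / 1e9:.0f} billion years")
# Prints roughly 7 billion years -- an upper bound consistent with the
# ~4-billion-year merger estimate once the acceleration is taken into account.
```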

The Ethics Of Facebook’s Emotion-Manipulation Research


I’ve railed against Facebook many times on this blog, and in 2010’s “Facebook: Beyond The Last Straw” I promised I would stop. I managed to keep that promise for nearly four years, but I’ve been roused to rail once again by the confluence of four different interests I happen to have: emotion research (one of my philosophical activities), ethics (a subject I teach), federal regulations covering university research (which I help to administer by serving on my university’s Institutional Review Board), and the internet (which, of course, I constantly use).

In case you haven’t yet heard, what Facebook did was to manipulate the “news feeds” users received from their friends, eliminating items with words associated with either positive emotions or negative emotions, and then observing the degree to which the manipulated users subsequently employed such positive or negative vocabulary in their own posts. Facebook’s main goal was to disconfirm a hypothesis suggested by previous researchers that users would be placed in a negative mood by their friends’ positive news items, or in a positive mood by their friends’ negative news items. As I understand it, the results did disconfirm that hypothesis, and confirmed the opposite one (namely, that users would be placed in congruent rather than incongruent mood states by reading their friends’ positive or negative news items), but just barely.
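
To make the experimental design concrete, here’s a minimal sketch of the kind of word-counting comparison involved; this is my own illustration with made-up word lists and toy posts, not Facebook’s actual code, data, or sentiment categories-

```python
# Illustrative sketch only -- not Facebook's pipeline. The word lists below are
# hypothetical stand-ins for the sentiment categories the researchers used.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "angry", "hate", "terrible"}

def emotion_rates(posts):
    """Return the fraction of words in `posts` that are positive / negative."""
    total = pos = neg = 0
    for post in posts:
        for word in post.lower().split():
            total += 1
            if word in POSITIVE_WORDS:
                pos += 1
            elif word in NEGATIVE_WORDS:
                neg += 1
    if total == 0:
        return 0.0, 0.0
    return pos / total, neg / total

# Toy comparison: posts written by users whose feeds had positive items filtered
# out, versus a control group. Emotional contagion (the hypothesis the data
# ended up supporting, if only barely) predicts the first group will use
# slightly *fewer* positive words, not more.
posts_reduced_positive_feed = ["feeling kind of flat today", "work was terrible"]
posts_control_feed = ["what a wonderful morning", "love this new song"]

print(emotion_rates(posts_reduced_positive_feed))
print(emotion_rates(posts_control_feed))
```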

Although I find this methodology questionable on a number of grounds, apparently the peer reviewers did not: the research was published in a reputable journal. More interesting to me are the ethical implications of Facebook’s having used its users as guinea pigs in this way.

The best article I’ve found on the net about the ethical issues raised by this experiment was written as an opinion piece on Wired by Michelle N. Meyer, Director of Bioethics Policy in the Union Graduate College-Icahn School of Medicine at Mount Sinai Bioethics Program. Meyer is writing specifically about the question of whether the research, which involved faculty from several universities whose human-subject research is federally regulated, could have (and should have) been approved under the relevant regulations. Ultimately, she argues that it both could have and should have, assuming that the manipulation posed minimal risk (relative to other manipulations users regularly undergo on Facebook and other sites). Her only caveat is that more specific consent should have been obtained from the subjects (without giving away the manipulation involved), and some debriefing should have occurred afterward. If you’re interested in her reasoning, which at first glance I find basically sound, I encourage you to read the whole article. Meyer’s bottom line is this-

We can certainly have a conversation about the appropriateness of Facebook-like manipulations, data mining, and other 21st-century practices. But so long as we allow private entities to engage freely in these practices, we ought not unduly restrain academics trying to determine their effects. Recall those fear appeals I mentioned above. As one social psychology doctoral candidate noted on Twitter, IRBs make it impossible to study the effects of appeals that carry the same intensity of fear as real-world appeals to which people are exposed routinely, and on a mass scale, with unknown consequences. That doesn’t make a lot of sense. What corporations can do at will to serve their bottom line, and non-profits can do to serve their cause, we shouldn’t make (even) harder—or impossible—for those seeking to produce generalizable knowledge to do.

My only gripe with this is that it doesn’t push strongly enough for the sort of “conversation” mentioned in the first line. The ways in which social media sites – and other internet sites – can legally manipulate their users without their specific consent are, as far as I can tell, entirely unregulated. Yes, the net should be open and free, but manipulation of the sort Facebook engaged in undermines rather than enhances user freedom. We shouldn’t expect to be able to avoid every attempt to influence our emotions, but there is an important difference between (for instance) being exposed to an ad as a price of admission and having the information your friends intended you to see edited, unbeknownst to you or them, for some third party’s ulterior purpose.

Occam’s Razor And Malaysia Airlines Flight 370


Who knows, at this point, just what happened to Malaysia Airlines Flight 370? Only one thing seems clear as the media has ascended to ever-higher flights of fancy about it: no one seems to want to ruin a good yarn by promoting the simplest available hypothesis (as Occam’s razor would prescribe); no one, that is, except Chris Goodfellow in this Wired post-

The left turn is the key here. Zaharie Ahmad Shah was a very experienced senior captain with 18,000 hours of flight time. We old pilots were drilled to know what is the closest airport of safe harbor while in cruise. Airports behind us, airports abeam us, and airports ahead of us. They’re always in our head. Always. If something happens, you don’t want to be thinking about what are you going to do–you already know what you are going to do. When I saw that left turn with a direct heading, I instinctively knew he was heading for an airport. He was taking a direct route to Pulau Langkawi, a 13,000-foot airstrip with an approach over water and no obstacles. The captain did not turn back to Kuala Lumpur because he knew he had 8,000-foot ridges to cross. He knew the terrain was friendlier toward Langkawi, which also was closer.

For me, the loss of transponders and communications makes perfect sense in a fire. And there most likely was an electrical fire. In the case of a fire, the first response is to pull the main busses and restore circuits one by one until you have isolated the bad one. If they pulled the busses, the plane would go silent. It probably was a serious event and the flight crew was occupied with controlling the plane and trying to fight the fire. Aviate, navigate, and lastly, communicate is the mantra in such situations.

What I think happened is the flight crew was overcome by smoke and the plane continued on the heading, probably on George (autopilot), until it ran out of fuel or the fire destroyed the control surfaces and it crashed. You will find it along that route–looking elsewhere is pointless.

If you’re curious about the details of this relatively reasonable hypothesis, read the whole story.

Fun With Frax


If you own an iPhone or an iPad and have ever had the urge to release your inner graphic artist… the one you’ve kept hidden since you were 5 years old and discovered you couldn’t draw worth s#*t… then you absolutely must shell out a whopping $2 and download a copy of Frax, the amazing little iOS app that allows you to create an infinite variety of fractal images using gestures, tilt, and a few simple controls. Once created, you can save your masterpieces to your photo library, or upload them to the Frax Cloud and have them rendered in ultra-high, poster-sized resolution.
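
If you’re curious about what’s going on under the hood, here’s a minimal sketch of the escape-time idea behind Mandelbrot/Julia-style fractals, the family Frax is built around (my own illustration; Frax’s real-time, gesture-driven renderer is of course far more sophisticated)-

```python
# A minimal escape-time sketch of the Mandelbrot-set idea behind apps like Frax.
# This is my own illustration, not Frax's actual rendering code.

def escape_time(c, max_iter=50):
    """Number of iterations of z -> z*z + c before |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Render a coarse ASCII view of the set: '#' marks points that never escape.
for y in range(20):
    row = ""
    for x in range(60):
        c = complex(-2.2 + x * 0.05, -1.0 + y * 0.1)
        row += "#" if escape_time(c) == 50 else " "
    print(row)
```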

Here’s an image I created in about five minutes, just trying to learn how to use the app (which has very helpful instructions embedded into it). To see what people who actually know what they’re doing with the app can come up with, check out the gallery at the Frax site.

First Try With Frax

President Obama’s Climate Change Speech


Amazingly, at least in my neck of the woods, only one cable news network covered President Obama’s full speech on climate change policy today: Fox Business, which immediately followed it with an interview of an electric company executive who is investing heavily in coal. Interestingly, even he had to be goaded by the hosts into criticizing the policies outlined in the speech. The other cable news providers, even MSNBC, had other stories that they deemed more important. So I’m linking to a video of the speech here, and I would urge everyone who has heard about it only second-hand to watch it. Although it’s not one of Obama’s best-delivered speeches, what with the heat in D.C. today (not a bad way of setting the scene, given the speech’s topic), the policies outlined in it are significant.

The President’s failure in the past to act more decisively and to speak more explicitly on the climate problem has disappointed me. I’m no longer disappointed.

Are You A Boltzmann Brain?


If you’re a Boltzmann brain, then I’m likely a figment of your imagination as you float around in otherwise empty (or at least high-entropy) space, a minimum assemblage of whatever matter or energy is required to generate your thoughts and images. You emerged as a “quantum fluctuation” of particles out of the quantum fields that underlie space itself – your mother was the vacuum (no offense intended). Yes, you were an unlikely fluctuation, but given enough time – and an eternity is more than enough time – you were bound to happen at some point. In fact, at least absent assumptions far more speculative and untested than those of statistical mechanics and quantum physics, it was far more likely that you would emerge as an isolated brain – or whatever assemblage of particles you really are – in infinite space than that the Big Bang would have occurred with just the right properties to give rise to the universe as we observe it.
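
For the flavor of the statistical argument behind that claim, here is the textbook sketch (a rough estimate, not an exact cosmological calculation)-

```latex
% Boltzmann's entropy formula: S is (the log of) the number W of microstates
% compatible with a given macrostate.
S = k_B \ln W

% Standard fluctuation estimate: the probability of a spontaneous dip of
% \Delta S below the equilibrium entropy falls off exponentially,
P \sim e^{-\Delta S / k_B}

% A lone brain-sized fluctuation requires a vastly smaller entropy dip than an
% entire low-entropy Big Bang, so -- given unlimited time -- it is by far the
% more probable way for an observer to arise.
```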

The idea that you could be mistaken about everything except the fact of your own bare existence as a conscious mind is nothing new. In his Meditations, Descartes developed such a scenario on his way to convincing himself that his own mind certainly existed, and hence (along with several controversial assumptions) that a benevolent, omnipotent God must exist, and therefore that our everyday beliefs about the physical world are highly likely to be true (as long as we form them carefully). To make his skeptical scenario psychologically vivid and a worthy antagonist to defeat, Descartes imagined that a malevolent demon might be deceiving him in every possible way. Of course, Descartes recognized that his demon scenario was utterly improbable, but since in his view knowledge had to be built on an absolutely certain foundation, he thought that the mere possibility of such a demon could undermine his previously uncritical faith in his common sense beliefs, and that showing that such a demon could not cause him to reasonably doubt his own existence would go a long way towards establishing a firm foundation for math, physics, and the other sciences. Critics, of course, love to point out that a mere possibility is insufficient to justify a reasonable doubt. It is possible that a mountain of gold will soon emerge in my back yard, but that mere possibility gives me no reason to quit my day job just yet. The possibility of a demon similarly can provide no reasonable ground for doubting my common sense beliefs. By contrast, the disturbing aspect of the Boltzmann brain scenario is that our best-tested physical theories actually suggest that being a Boltzmann brain is not only possible, it’s actually more likely – much more likely – than the situation in which we believe ourselves to be.

To explain why we observe a relatively orderly, amenable universe around us, even though a higher-entropy, less amenable sort of universe is far more likely to emerge from the cosmos on purely statistical grounds, we naturalists often appeal to an “anthropic principle”: in an infinite universe, some regions are likely to be more amenable to life than others, and life will quite predictably exist only in those regions where its evolution is possible. But the statistical reasoning that supports the probability of your being a Boltzmann brain also undercuts such appeals to anthropic principles. Sean Carroll puts this nicely in his book, “From Eternity To Here”-

… Maybe, we might reason [in accordance with an anthropic principle], in order for an advanced scientific civilization such as ours to arise, we require a “support system” in the form of an entire universe filled with stars and galaxies, originating in some sort of super-low-entropy early condition. Maybe that could explain why we find such a profligate universe around us.

No. Here is how the game should be played: You tell me the particular thing you insist must exist in the universe, for anthropic reasons. A solar system, a planet, a particular ecosystem, … whatever you like. And then we ask, “Given that requirement, what is the most likely state of the rest of the universe [given statistical mechanics and quantum theory], in addition to the particular thing we are asking for?”

And the answer is always the same: The most likely state of the rest of the universe is to be in equilibrium. If we ask, “What is the most likely way for an infinite box of gas in equilibrium to fluctuate into a state containing a pumpkin pie?,” the answer is “By fluctuating into a state that consists of a pumpkin pie floating by itself in an otherwise homogeneous box of gas.” Adding anything else to the picture, either in space or in time – an oven, a baker, a previously existing pumpkin patch – only makes the scenario less likely, because the entropy would have to dip lower to make that happen.

It’s important to emphasize that Carroll’s point here isn’t to argue that we should in fact believe that we are Boltzmann brains, but rather to provide a sort of reductio ad absurdum of the limited set of assumptions and theories that lead us to that conclusion. Still, upon finishing Carroll’s book, which avoids the Boltzmann brain conclusion only by indulging in some extremely tentative cosmological speculations, it’s hard to simply dismiss the possibility that we are, in fact, Boltzmann brains.

Is Mitt Romney A Probability Wave?


From David Javerbaum’s amusing opinion piece in last Sunday’s New York Times, entitled “A Quantum Theory of Mitt Romney”-

Before Mitt Romney, those seeking the presidency operated under the laws of so-called classical politics, laws still followed by traditional campaigners like Newt Gingrich. Under these Newtonian principles, a candidate’s position on an issue tends to stay at rest until an outside force — the Tea Party, say, or a six-figure credit line at Tiffany — compels him to alter his stance, at a speed commensurate with the size of the force (usually large) and in inverse proportion to the depth of his beliefs (invariably negligible). This alteration, framed as a positive by the candidate, then provokes an equal but opposite reaction among his rivals.

But the Romney candidacy represents literally a quantum leap forward. It is governed by rules that are bizarre and appear to go against everyday experience and common sense. To be honest, even people like Mr. Fehrnstrom who are experts in Mitt Romney’s reality, or “Romneality,” seem bewildered by its implications; and any person who tells you he or she truly “understands” Mitt Romney is either lying or a corporation.

Javerbaum goes on to argue (in an admirably concise and facile way) that Romneality illustrates all of the major concepts of quantum theory: complementarity, probability, uncertainty, entanglement, noncausality, and duality. Read it for yourself, and marvel at Mitt Romney’s phenomenal awesomeness!

(Thanks Nathan).

Thinking While Driving


According to a story today in USA Today, talking or texting on a cell phone isn’t the only way to endanger yourself and others on the road. Concentrated thinking about anything causes similar distraction, at least if these researchers are correct-

The group, led by Bryan Reimer, a research scientist at MIT’s AgeLab, found that a driver’s ability to focus on the driving environment varies depending on the “cognitive demand” of a non-driving activity. That is, the deeper the level of thought in a driver’s mind, the less he focuses on his surroundings.

Good drivers routinely scan the road ahead and around them, looking for potential hazards that they might need to react to. When drivers face even light levels of cognitive demand, they scan the road less, Reimer says.

“In the past, the emphasis was on whether you’re distracted or not distracted,” he says. “This is too simple of a categorization. There are levels of cognitive demand, and those levels are statistically distinguishable.

“The level of thought going on has a relationship to how much a driver is aware of the driving environment,” he says.

Thinking while driving: there should be a law against it. Maybe when a cop pulls you over, the first thing he or she should look for isn’t an open container, but an open book (or an audio book), or any other tell-tale trace of thoughtfulness. College professors, of course, should be immediately suspect.

Living In The Anthropocene Era


Given the dim-witted ignorance of science currently being manifested by prominent Republicans (from Rick Santorum’s cynical climate-change denial to Rush Limbaugh’s obvious misconceptions of how the birth control pill works), it was refreshing to read a story this week in a magazine as popular as Time outlining very concisely the influence billions of human beings are having on the natural world-

For a species that has been around for less than 1% of 1% of the earth’s 4.5 billion-year history, Homo sapiens has certainly put its stamp on the place. Humans have had a direct impact on more than three-quarters of the ice-free land on earth. Almost 90% of the world’s plant activity now takes place in ecosystems where people play a significant role. We’ve stripped the original forests from much of North America and Europe and helped push tens of thousands of species into extinction. Even in the vast oceans, among the few areas of the planet uninhabited by humans, our presence has been felt thanks to overfishing and marine pollution. Through artificial fertilizers – which have dramatically increased food production and, with it, human population – we’ve transformed huge amounts of nitrogen from an inert gas in our atmosphere into an active ingredient in our soil, the runoff from which has created massive aquatic dead zones in coastal areas. And all the CO2 that the 7 billion-plus humans on earth emit is rapidly changing the climate – and altering the very nature of the planet.

Human activity now shapes the earth more than any other independent geologic or climatic factor. Our impact on the planet’s surface and atmosphere has become so powerful that scientists are considering changing the way we measure geologic time. Right now we’re officially living in the Holocene epoch, a particularly pleasant period that started when the last ice age ended 12,000 years ago. But some scientists argue that we’ve broken into a new epoch that they call the Anthropocene: the age of man. “Human dominance of biological, chemical and geological processes on Earth is already an undeniable reality,” writes Paul Crutzen, the Nobel Prize-winning atmospheric chemist who first popularized the term Anthropocene. “It’s no longer us against ‘Nature.’ Instead, it’s we who decide what nature is and what it will be.”

To carry this line of reasoning one step further: with the advent of genetic engineering and the growing understanding of human psychology and neurology, it is also we who might decide what human nature will be. That old Existentialist adage, “existence precedes essence,” used to apply just to one’s own self-understanding; the suggestion was that, from the subjective viewpoint of lived experience, one first finds oneself existing, and then discovers that one’s “essence” or “nature” follows from what one chooses to do. But now it appears that this might soon become true not just from a subjective point of view, but also from an objective, scientific one: it seems that not only human nature, but also nature itself has become our responsibility. And given our track-record up to now, this should certainly cause some angst.

Climate Change’s Closing Door


The Guardian reported last November that, judging by “the most thorough analysis yet of world energy infrastructure”, we likely have little more than five years left to put a lid on carbon emissions before losing the chance of avoiding serious climate change. Although I’ve never been tempted to doubt the scientific consensus on climate change, I’ve probably been relying on wishful thinking (like almost everyone else) to avoid feeling too anxious about it. I’ve got to admit, though, that these warnings are starting to get to me-

The world is likely to build so many fossil-fuelled power stations, energy-guzzling factories and inefficient buildings in the next five years that it will become impossible to hold global warming to safe levels, and the last chance of combating dangerous climate change will be “lost for ever”, according to the most thorough analysis yet of world energy infrastructure.

Anything built from now on that produces carbon will do so for decades, and this “lock-in” effect will be the single factor most likely to produce irreversible climate change, the world’s foremost authority on energy economics has found. If this is not rapidly changed within the next five years, the results are likely to be disastrous.

“The door is closing,” Fatih Birol, chief economist at the International Energy Agency, said. “I am very worried – if we don’t change direction now on how we use energy, we will end up beyond what scientists tell us is the minimum [for safety]. The door will be closed forever.”

If the world is to stay below 2C of warming, which scientists regard as the limit of safety, then emissions must be held to no more than 450 parts per million (ppm) of carbon dioxide in the atmosphere; the level is currently around 390ppm. But the world’s existing infrastructure is already producing 80% of that “carbon budget”, according to the IEA’s analysis, published on Wednesday. This gives an ever-narrowing gap in which to reform the global economy on to a low-carbon footing.

If current trends continue, and we go on building high-carbon energy generation, then by 2015 at least 90% of the available “carbon budget” will be swallowed up by our energy and industrial infrastructure. By 2017, there will be no room for manoeuvre at all – the whole of the carbon budget will be spoken for, according to the IEA’s calculations.
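
For a rough sense of the arithmetic behind those percentages, here’s a back-of-the-envelope extrapolation of my own, using only the figures quoted above (the IEA’s actual analysis is, of course, far more detailed)-

```python
# Crude linear extrapolation from the figures quoted above; the IEA's own
# projection (no room left by 2017) implies the lock-in accelerates rather
# than growing linearly.

locked_in = {2011: 0.80, 2015: 0.90}   # share of the 2C "carbon budget" already
                                       # committed by existing infrastructure

rate_per_year = (locked_in[2015] - locked_in[2011]) / (2015 - 2011)   # 2.5 points/yr
year_exhausted = 2015 + (1.0 - locked_in[2015]) / rate_per_year

print(f"Budget being committed at roughly {rate_per_year:.1%} per year")
print(f"Fully spoken for around {year_exhausted:.0f} on a straight-line trend")
# Prints ~2019; the IEA's 2017 figure assumes construction keeps accelerating.
```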

Funny, I don’t remember hearing about the IEA report via American mass media, although given how little time they spend on reporting scientific findings, I guess that shouldn’t surprise me. I’m still patiently waiting for an intrepid reporter at one of the zillion Republican debates to challenge Rick Santorum on his explicit climate change denial in the face of the ever-mounting evidence. He did say recently: “You hear all the time, the left – ‘Oh, the conservatives are the anti-science party.’ No we’re not. We’re the truth party.” Surely that invites a polite question on what he means by “the truth” here, and how he would go about establishing it…

By the way, lest you think that the IEA is some liberal advocacy group whose studies can’t be trusted, it’s actually an international organization with 28 member states.

Put This Anecdote In Your Pipe And Smoke It


Montana Public Broadcasting has produced an interesting documentary on the controversy surrounding medical marijuana. Critics will no doubt cite the anecdotal nature of the evidence in favor of medical use, but supporters will point out the weaknesses of the objections to medical use and the apparent inconsistencies in the federal government’s policies. Did you know, for instance, that federal law allows Schedule II drugs – which include methamphetamine, cocaine, opium and morphine – to be prescribed for medical purposes, but not marijuana, which is listed on Schedule I as a drug with a high potential for abuse but no medical use?

We embed, you decide-
