Consensus has never come easily in philosophy. It is said that there was only a single occasion when three of the most famous philosophers of the last century, Ludwig Wittgenstein, Bertrand Russell and Karl Popper, were in the same room. An argument soon arose, and Wittgenstein waved a poker in Popper’s face, challenging him to name a single moral rule. Popper replied, “Not to threaten visiting speakers with pokers.” At that, Russell told his former student to put down the poker, and Wittgenstein stormed out.

No doubt this was a better outcome than a poke in the eye, but it is suggestive of something deeply dysfunctional in modern philosophy!

Analytic philosophy is an extensive 20th-century philosophical tradition characterised by logic and clarity of thinking, analysis of propositions, precision of language and avoidance of presuppositions. The work of Frege and Bertrand Russell prepared the ground for Wittgenstein. Analytic philosophy has been very important because it represents a back-to-basics movement, a search for truth and certainty in some form or other. However, in its rarefied forms, analytic philosophy has been preoccupied with how language shapes and distorts meaning itself – and not much else.

Disarmed of metaphysics, the defence of the person was founded on the ultimately illogical assertion that rationality and freedom emerge from the determinism of matter. As a result, analytic philosophy seems to have painted itself into a corner.

My pilgrimage, however, introduced me to one of Oxford’s greatest daughters, Elizabeth Anscombe, Wittgenstein’s straight-talking pupil. In contrast to some in her profession, she proclaimed, ‘Philosophy is thinking about the most difficult and ultimate questions.’ As a Catholic convert and mother of seven, she was a woman of courage and conviction.

On my return home I delved into Elizabeth Anscombe’s work, and I believe she can lead us out of the current cul-de-sac of determinism.

(1) _Philosophy must rediscover the connection between truth and personal virtue._ Anscombe argued in her groundbreaking 1958 paper ‘Modern Moral Philosophy’ that, without a notion of human flourishing founded on virtue, ethics is wasted effort. She said we must ‘stop doing moral philosophy until we get our psychology straight’.

(2) _Forget the ghost in the machine, the famous summary of Descartes’ view of how a human being works._ Philosophy must once and for all discard Cartesian ways of thinking about matter. Anscombe insisted, ‘The divide between matter and mind was drawn differently by the ancients and medievals from the way it is drawn in modern times. So far as I know, the source of the new way of drawing the line is Descartes…’ She goes on to nominate the very passage where the mistake was made in his Second Meditation.

(3) _Philosophy must avoid a priori atheism and materialism._ Anscombe wrote, “Analytic philosophy is about styles of argument and investigation, and is compatible with belief in God, and Christian belief in God.” In this light it seems ironic that Peter Hacker, Simon Blackburn, Raymond Tallis and David Papineau appear to have opted for a priori atheism, since an a priori stance is, at its heart, non-analytic.

(4) _Analytic philosophy must seek truth wherever it is to be found._ By her life’s work Anscombe demonstrated that it is possible for analytic philosophy to build bridges to Aristotelian realism and to ethics. Together with her philosopher-husband Peter Geach she is credited with giving inspiration to the field of study known as Analytic Thomism.

Sometimes, when I look at a clock, time seems to stand still. Maybe you’ve noticed this to your bemusement or horror as well. You’ll be in the middle of something, and flick your eyes up to an analogue clock on the wall to see what the time is. The second hand of the clock seems to hang in space, as if you’ve just caught the clock in a moment of laziness. After this pause, time seems to restart and the clock ticks on as normal.

It gives us the disconcerting idea that even something as undeniable as time can be a bit less reliable than we think.

The theory is that our brains attempt to build a seamless story about the world from the ongoing input of our senses. Rapid eye movements create a break in information, which needs to be covered up. Always keen to hide its tracks, the brain fills in this gap with whatever comes after the break.

Normally this subterfuge is undetectable, but if you happen to move your eyes to something that is moving with precise regularity – like a clock – you will spot this pause in the form of an extra-long “second”. Fitting with this theory, the UCL team also showed that longer eye movements lead to longer apparent pauses of the stopped clock.

Every day, we face thousands of decisions both major and minor — from whether to eat that decadent chocolate cupcake to when to pursue a new romantic relationship or to change careers. How does the brain decide? A new study suggests that it relies on two separate networks to do so: one that determines the overall value — the risk versus reward — of individual choices and another that guides how you ultimately behave.

“Cognitive control and value-based decision-making tasks appear to depend on different brain regions within the prefrontal cortex,” says Jan Glascher, lead author of the study and a visiting associate at the California Institute of Technology in Pasadena, referring to the seat of higher-level reasoning in the brain.

Study co-author Ralph Adolphs, a professor of psychology at Caltech, explains the distinction by way of a grocery shopping example: “Your valuation network is always providing you with information about what’s rewarding around you — the things you want to buy — but also lots of distracting things like junk food and other items popping into your vision off the shelves.”

Cognitive control is what keeps this network in check. “To be able to get to the checkout counter with what you planned, you need to maintain a goal in mind, such as perhaps only buying the salad you needed for dinner,” says Adolphs. “That’s your cognitive control network maintaining an overall goal despite lots of distractions.”

The overall control of impulses is split between the two networks, though they do not counteract each other. When the cognitive control regions are working well, distractions are ignored and behaviors occur in the appropriate context; when valuation is appropriate, choices are made that are likely to be beneficial in the long run. However, says Adolphs, “when either one of them goes offline, impulsive behaviors get stronger and may not be inhibited.”

Andrei Alexandrescu:

This pattern is common to all great programmers I know: they’re not experts in something as much as experts in becoming experts in something.

The best programming advice I ever got was to spend my entire career becoming educable. And I suggest you do the same.

Mark Bates:

Laziness can be a virtue. There are two types of laziness in this world. Good laziness and bad laziness. Every developer should strive towards being a good lazy developer. Normally laziness has a stigma attached to it, and rightfully so, but in the programming world, laziness can be a real asset.

Erik M. Buck:

Write less code.

Steve Jobs famously observed, “The line of code that is the fastest to write, that never breaks, that doesn’t need maintenance, is the line that you never have to write.”

Obie Fernandez:

The following advice may seem bloody obvious, but too often I must remind myself and others of it: “Take a moment to understand the error message at the top of an exception/stack trace before making additional changes to your code.”

Danny Kalev:

[R]ead much more than you write, and stick to high quality material. Say goodbye to books that insult your intelligence. Instead, aim higher at professional, up-to-date material. That’s the ticket.

Jeremy Likness:

The best programming advice that I ever received can be summarized by the acronym YAGNI, or “You Aren’t Gonna Need It.” This is a very tough principle for many developers to follow because it is easy to get caught up in the excitement of building a rich architecture and providing clever solutions to problems. […]

Simplicity is key to building great software and I believe many developers create solutions that are overly complex. It’s easy to get away with this when you are running a small team, but when the development team (and typically the product itself) get larger, simplicity becomes more important. It’s not just about a clever solution, but one that is easy to read, learn, and understand.

Eric Lippert:

My manager advised me to pick a relatively narrow area, say, JScript language semantics, and then find questions on the internet and within Microsoft that other people had on that topic. If I could answer the question, I’d answer it. If I couldn’t, then I’d research the question until I could definitively answer it. This paid off in more than just my increased expertise.

Russ Olsen:

In later years, as I found myself building and managing software teams, I’ve realized that there were probably a dozen programmers on that ancient project who knew why the system was so slow and how to fix it. They knew, but they kept it to themselves because in that organization, there were some things that were more important than making the system better. “In the future, stay the hell out of other people’s code,” assumes there will be a future. But the best way to have a future is to be part of a team that values progress over politics, ideas over territory, and initiative over decorum.

Rob Pike:

I recognize this is largely a matter of style. Some people insist on line-by-line tool-driven debugging for everything. But I now believe that thinking—without looking at the code—is the best debugging tool of all, because it leads to better software.

Mark Summerfield:

The first idea is refactoring. This means taking a function or method and making sure that it does just one specific thing. This often involves creating helper functions or methods to take over part of the work the original one did. Each helper should also be refactored so that it too does one specific thing. One advantage of doing this is that refactored functions and methods are smaller and easier to understand. […]
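As a minimal sketch of that refactoring idea (the record format and helper names here are invented purely for illustration, not taken from Summerfield):

```python
# Before: one function that parses, validates, and formats a record.
def report_line(raw):
    name, score = raw.strip().split(",")       # parsing
    score = int(score)                         # parsing
    if not 0 <= score <= 100:                  # validation
        raise ValueError(f"score out of range: {score}")
    return f"{name}: {score}%"                 # formatting

# After: each helper does just one specific thing.
def parse_record(raw):
    """Split a 'name,score' line into its two fields."""
    name, score = raw.strip().split(",")
    return name, int(score)

def validate_score(score):
    """Reject scores outside the 0-100 range."""
    if not 0 <= score <= 100:
        raise ValueError(f"score out of range: {score}")
    return score

def format_line(name, score):
    """Render one line of the report."""
    return f"{name}: {score}%"

def report_line(raw):
    """Compose the three single-purpose helpers."""
    name, score = parse_record(raw)
    return format_line(name, validate_score(score))
```

Each helper is now short enough to read at a glance and can be tested (or reused) on its own.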

The second idea is “TDD” (Test-Driven Design/Test-Driven Development).

This involves writing tests before writing an application (or before adding a new feature or doing a change). These tests will naturally fail since what they test hasn’t been done yet. Then the implementation is done and is complete once the tests pass. This sounds like a lot of work if you’re not used to it—but in fact it can save huge amounts of time.
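A minimal sketch of that cycle, using Python’s standard `unittest` module and a hypothetical `slugify()` function as the feature under development:

```python
import unittest

# Step 1: write the tests first. Run them and watch them fail,
# because slugify() does not exist yet.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Step 2: write just enough code to make the tests pass.
def slugify(title):
    """Turn a title into a lowercase, hyphen-joined URL slug."""
    return "-".join(title.split()).lower()

# Step 3: the feature is done when `python -m unittest` goes green.
```

The failing tests act as an executable specification: the implementation is complete exactly when they pass, and they remain afterwards as a safety net for future changes.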

Bill Wagner:

One of the leaders on my earliest projects told me, “Make code usable before you make it reusable.” It’s so easy to get caught up in making something perfect and extensible that sometimes we don’t even make code usable in the first place. Once you’ve got something that satisfies its original purpose, you can see how it may be extended. Until it’s actually used, you can’t know where it could be extended or reused.

We’ll admit that it has been quite a while since we cracked a copy of Dante’s Divine Comedy, but it’s still easy to recognize the nine circles of hell, recreated here in LEGO by Romanian artist Mihai Marius Mihu. The epic project took him seven months to complete, and he used some 40,000 little plastic bricks to realize his vision.

“I didn’t read the Divine Comedy, only the small descriptions of the circles I found on Wikipedia and on other websites,” Mihu has explained. “I didn’t want to be much influenced by the original descriptions because I wanted to give a whole new fresh approach for each circle. I thought more about the significance of titles and from then on it was only my imagination.”

Dante’s _Divine Comedy_ has been on my to-read list for a while now. One day…

In the Book Notes series, authors create and discuss a music playlist that relates in some way to their recently published book. […]

Charles Yu brings his keen literary sensibility to science fiction yet again in his short story collection, Sorry Please Thank You. This solid group of stories will appeal as much to lovers of science fiction, fantasy, and horror as to literary readers, who will appreciate their diversity of style and theme.

In his own words, here is Charles Yu’s Book Notes music playlist for his novel, Sorry Please Thank You:

I mostly write at night, after work and putting my kids down to sleep. After a full day of tasks and responsibilities, the inside of my head is sticky and damp, covered with the grime of the day. Not always the best space for experimentation – cluttered. Music helps me wash my mind a bit.

My new collection has 13 stories, and my hope was that they would all be quite different from each other, in tone, and yet somehow connected at the root: voice-based thought experiments in form and tone. Whereas with my previous book, I listened to certain songs over and over again to create a feeling of being stuck in a time loop, with the new book, I sought out new music, trying to broaden my emotional vocabulary, diverse sounds and moods to draw upon and be inspired by.

Here are 13 songs for 13 stories:

• “Code Monkey” – Jonathan Coulton  … [got it, love it]

• “Mandelbrot Set” – Jonathan Coulton  … [got it, like it]

• “Polite Dance Song” – The Bird and the Bee  … [listened to it online - not bad]

• “Where Is My Mind?” – The Pixies  …  [got it, love it]

• “Bloodbuzz Ohio” – The National  … [listened to it online - pretty good]

• “Oh, Maker” – Janelle Monae  …

• “Yoshimi Battles the Pink Robots” – The Flaming Lips …  [got it, love it]

• “Lindisfarne I” and “Lindisfarne II” – James Blake  …

• “The Cure” – Trombone Shorty  …

• “Hey Boy” – The Blow  …

• “Serenade in Blue” – Stan Getz  … [listened to it online - pretty good]

• “Tell Him” – Lauryn Hill  …

An interesting selection of songs. I’ve read and mostly enjoyed Yu’s novel, _How to Live Safely in a Science Fictional Universe_. I have about half the songs in my collection, and will probably listen to the others.

Using fMRI, the researchers found that managers’ brains were less active in a number of areas, compared to the brains of non-managers, when doing the same task. By contrast, managerial brains were more active than the others only in one small area (caudate nucleus).

The researchers’ argument hinged on the conclusion that:

While non-managers wasted brainpower on thinking through the task with several areas of their cerebral cortex, the managers (so to speak) downsized their neurological expenditure by outsourcing the work to their caudate nucleus, an area responsible for applying a simple but effective rule.

Another fMRI-based study to take with a pinch of salt. I guess these are still relatively early days in the development of neuroscience. Hopefully, technology and techniques will continue to improve and more solid research findings will follow.

What would it mean to lack free will? It might mean we are merely puppets, our strings pulled by forces beyond our awareness and beyond our control. It might mean we are players who merely act out a script we do not author. Or perhaps we think we make up our stories, but in fact we do so only after we’ve already acted them out. The central image in each case is that we merely observe what happens, rather than making a difference to what happens.

How might neuroscience fit into the story I am telling? Most scientists who discuss free will say the story has an unhappy ending—that neuroscience shows free will to be an illusion. I call these scientists “willusionists.”

There are several ways willusionists reach their conclusion that we lack free will. The first begins by defining free will in a dubious way. Most willusionists assume that, by definition, free will requires a supernatural power of non-physical minds or souls: it’s only possible if we are somehow offstage, beyond the causal interactions of the natural world, yet also somehow able to pull the strings of our bodies nonetheless. […] Based on this definition of free will, they then conclude that neuroscience challenges free will, since it replaces a non-physical mind or soul with a physical brain.

But there is no reason to define free will as requiring this dualist picture. Among philosophers, very few develop theories of free will that conflict with a naturalistic understanding of the mind—free will requires choice and control, and for some philosophers, indeterminism, but it does not require dualism. […] [S]tudies strongly suggest that what people primarily associate with free will and moral responsibility is the capacity to make conscious decisions and to control one’s actions in light of such decisions.

But willusionists also argue that neuroscience challenges free will by challenging this role for consciousness in decision-making and action. Research by Benjamin Libet, and more recently by neuroscientists such as John-Dylan Haynes, suggests that activity in the brain regularly precedes behavior—no surprise there!—but also precedes our conscious awareness of making a decision to move.

[I]mproved brain imaging technology will likely provide increasingly precise predictions of future behavior. But here’s my prediction: the more complex the decisions and behavior, the more likely such predictions will be based on information about the very neural processes that are the basis of conscious deliberation and decision-making.

By understanding how the most complex thing in the universe—the human brain—works, we can better understand our capacities to make choices and to control our actions accordingly. On this telling of the tale, neuroscience can help to explain how free will works rather than explaining it away.

[F]ascinating research suggests that our conscious reasoning and planning is not pulling the strings as much as we tend to believe. We are subject to biases and influences beyond our awareness, and we sometimes confabulate or rationalize our behavior. But our stories are not always fiction. Other research suggests that our deliberations and decisions can have significant causal influences on what we decide and do, especially when we have difficult decisions to make and when we make complex plans for future action.

Free will is not all-or-nothing. It involves capacities that we develop as we mature, but that have limitations. Recognizing that people have differing degrees of free will can help us better determine when, and to what extent, people are responsible for their actions, and are deserving of praise or blame. Indeed, where it really matters—legal responsibility—it is most useful to understand free will as a set of capacities for reasoning and self-control which people possess to varying degrees and have varying opportunities to exercise.

In this respect, neuroscience and other sciences of the mind can play an important role by providing new insights into our capacities for rationality and self-control, as well as their limitations. We do not write our stories from scratch, but within the context of a complicated world of influences and interactions, our tales are not “full of sound and fury, signifying nothing.”

• WRITE EVERY DAY

Writing is a muscle. Smaller than a hamstring and slightly bigger than a bicep, and it needs to be exercised to get stronger. […]

• DON’T PROCRASTINATE

Procrastination is an alluring siren taunting you to Google the country where Balki from Perfect Strangers was from, and to arrange sticky notes on your dog in the shape of hilarious dog shorts. A wicked temptress beckoning you to watch your children, and take showers. […]

• FIGHT THROUGH WRITER’S BLOCK

The blank white page. El Diablo Blanco. El Pollo Loco. Whatever you choose to call it, staring into the abyss in search of an idea can be terrifying. […]

• LEARN FROM THE MASTERS

Mark Twain once said, “Show, don’t tell.” […]

• FIND YOUR MUSE

Finding a really good muse these days isn’t easy, so plan on going through quite a few before landing on a winner. […]

• HONE YOUR CRAFT

There are two things more difficult than writing. The first is editing, the second is expert level Sudoku where there’s literally two goddamned squares filled in. While editing is a grueling process, if you really work hard at it, in the end you may find that your piece has fewer words than it did before. […]

• ASK FOR FEEDBACK

It’s so easy to hide in your little bubble, typing your little words with your little fingers on your little laptop from the comfort of your tiny chair in your miniature little house. […]

• READ, READ, READ

It’s no secret that great writers are great readers, and that if you can’t read, your writing will often suffer. […]

• STUDY THE RULES, THEN BREAK THEM

Part of finding your own voice as a writer is finding your own grammar. Don’t spend your career lost in a sea of copycats when you can establish your own set of rules. […]

• KEEP IT TOGETHER

A writer’s brain is full of little gifts, like a piñata at a birthday party. It’s also full of demons, like a piñata at a birthday party in a mental hospital. The truth is, it’s demons that keep a tortured writer’s spirit alive, not Tootsie Rolls. Sure they’ll give you a tiny burst of energy, but they won’t do squat for your writing. So treat your demons with the respect they deserve, and with enough prescriptions to keep you wearing pants.

In Reading Like a Writer: A Guide for People Who Love Books and for Those Who Want to Write Them (public library), Francine Prose sets out to explore “how writers learn to do something that cannot be taught” and lays out a roadmap to learning the art of writing not through some prescriptive, didactic methodology but by absorbing, digesting, and appropriating the very qualities that make great literature great — from Flannery O’Connor’s mastery of detail to George Eliot’s exquisite character development to Philip Roth’s magical sentence structure.

Prose offers a timely admonition against the invasion of public opinion in the architecture of personal taste:

Part of a reader’s job is to find out why certain writers endure. This may require some rewiring, unhooking the connection that makes you think you have to have an opinion about the book and reconnecting that wire to whatever terminal lets you see reading as something that might move or delight you. You will do yourself a disservice if you confine your reading to the rising star whose six-figure, two-book contract might seem to indicate where your own work should be heading.