Recently
New light, Seattle

Code

To get started on some of the themes I have in mind for this year, I started learning and writing some Haskell. It’s not like the languages I’m used to writing, and it’s fun how it bends my mind and turns it inside out.

I’m solving Project Euler problems using Haskell, and I’ve already finished four of them. Four whole problems. There are currently 619 problems in the project.
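For flavor, here is the shape a solution takes. This is the classic first problem (sum of all multiples of 3 or 5 below 1000), shown as a minimal sketch rather than anything clever, and not a spoiler for the harder ones:

-- Project Euler, problem 1: sum of the multiples of 3 or 5 below 1000
euler1 :: Int
euler1 = sum [x | x <- [1..999], x `mod` 3 == 0 || x `mod` 5 == 0]

main :: IO ()
main = print euler1  -- 233168

The whole thing is one list comprehension, which is part of what makes the language feel so different.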

Reading

Books

  • The Conquest of Happiness by Bertrand Russell.

    Russell’s writing is the simplest, most straightforward I’ve ever read. It was published in 1930, but it might as well have been written two years ago. The problems are the same, and the things we get unhappy about are the same. It might not tell you much you don’t already know, but there’s a difference between what you know, and what you know that you know.

Listening

“Different Stars” by Trespassers William (2002)

If you pick only one album to try, pick this one.


“Soon It Will Be Cold Enough” by Emancipator (2008)


“Thao & Mirah” by Thao & Mirah (2011)


“Like the Linen” by Thao Nguyen (2005)


“Art:Work” by ANIMA! (2017)


“ALONE OST” by Rob Allison (2014)

The soundtrack from the game Alone by Laser Dog. Alone is one of the first Laser Dog games I played, but I like all of their games. Each game is different from the rest, but they all share a recognizable, happy artistic core.


'Take no one's word for it' in 2017

Brett Terpstra wrote in a recent post:

I’ve had a few really good coding days in a row here, which has meant not as much blogging this week.

Me, I’ve had a few really good coding months in a row here. Which has meant not as much blogging this year.

In my review of 2016, I was happy about progress and hoping to write more:

I’m moving to a new country and starting in a new research scientist role in 2017, and one way or another I think that will affect my writing here. What I hope will happen is that I’ll be able to write more about science and data as I learn more things faster in my new position.

It didn’t work out that way. I’ve learned and done more over the past 12 months than in any other two to three years combined. But none of that work was open source, and it was for a team for which confidentiality matters, so I couldn’t write about any of it. It also left me with much less time for writing than I used to have.

In 2017 I did a lot more coding, a lot more data analysis and machine learning, a lot more scripting and automating, a lot less bike riding, a lot less listening to music, and moderately less reading. It was a year well-spent, but also a year of strong tradeoffs.

So here is Take no one’s word for it in 2017:

Posts by month and year

Total posts by year

Words by month and year

Total words by year

Future

To quote again from my review of 2016:

The knock on new year’s resolutions is that they encourage you to wait until a seemingly arbitrary moment in time before you make a big change or do something to make your life better. Another knock is that this encourages you to attempt large changes instead of piecemeal changes, which increases the amount of discipline required for success, and therefore increases the chances of failure. Larger changes would happen less frequently, and that makes error-correction harder.

I think there is truth in there, but as with a lot of things people criticize today, the criticism loses its nuance and selectivity and becomes absolute. You shouldn’t wait until new year’s to make your life better, but setting checkpoints for retrospectives and projections at regular intervals is useful. New year’s is arbitrary, but no more arbitrary than any other date if you don’t have a better reason for picking one. Just make sure you’re not using it as an excuse to procrastinate.

I have my personal plans and goals for 2018, but I am wary of making them too specific. I couldn’t have planned for most of 2017 (I wrote a lot less than I thought I was going to, for one), and even though I expect 2018 to be less eventful, I can’t set my tradeoffs for a whole year. I do have a theme in mind, though, and I will try to apply this theme by planning one month at a time.

My early computing

I remember the exact moment I discovered that you could “click” a highlighted button by pressing the spacebar.

A relative of a family friend was sitting in front of our computer, installing Visual Studio 6. I had been waiting for this for a long time.


I want to write about Visual Basic, but I don’t see a path to that unless I first write about DOS. But I don’t know how to get to DOS without writing about the first computer I – my family – ever owned.

Many people had some form of old Apple or Macintosh machine as their first computer, but mine was a no-name grey Pentium tower that ran two versions of Windows: XP and 2000.

Today I can sit back and pontificate about how Mac OS X was a revelation when I saw it for the first time in its Leopard skin, and how ever since I got a white Intel MacBook in 2007 I can’t use Windows without getting a metallic aftertaste in my soul. But back in the early 2000s, I was in love with the computer I had in front of me, and that computer ran Windows.

I spent hours in front of that machine. There was no one to teach me, and we didn’t have internet, so I just clicked on everything I could click on and tried to figure out what the hell it did.


I have a big extended family, and all of them share the same occupation, except one of my uncles, who is in IT. He had a side business putting computers together and selling them, and he always had the coolest-looking things lying around: screwdriver sets, SATA cables, empty motherboard boxes, empty anti-static bags. I wanted to be around that stuff all the time.

He had a computer that ran Windows 95, and he had games which he would let me play sometimes, but he didn’t trust me to click around his Windows environment. So he taught me how to boot into DOS and launch the games from the command line.1


My uncle had a black Visual Basic 3 book that he didn’t need or use because it wasn’t 1993 anymore, so I took it.2 I read the book and understood that you could drag tools onto a window, give those tools names, then write some code to define what happened when you interacted with them. I read about this, but where could I actually do it? This was a Visual Basic 3 book!

A screenshot of my first programming world. Courtesy The Register

I forget how I discovered that Microsoft Office had some scripting section that not only let you write Visual Basic, but also create GUI programs! That was so awesome, and I did a lot of my early stumbling and learning in it.3 Then at some point an opportunity presented itself and we got a relative of a family friend who owed us a favor to install Visual Studio. He came over, opened his CD case, popped the Visual Studio CD into the disc tray, and went through the installation wizard. I could see the dialog windows as he went through them, and I realized he was advancing not by clicking the buttons, but by hitting the spacebar! Whoa….


Visual Basic gets made fun of a lot, I think, and I haven’t spent a lot of time trying to figure out whether there is merit to the mockery. But I do know that it was a very easy language to learn for someone who only had an ancient book to learn from. The only comparable development environment I know is Xcode, and it’s way more complicated to handle a multi-view Xcode project than it is to create a multi-form Visual Basic one.


I remembered learning that the spacebar clicked the highlighted button because I’ve been thinking about why I like computing so much. Why did I spend so much time running DOS commands I read about online? Why do I find intrinsic value in creating tools and automating things? Of course I’m not the only one, but we could all have different reasons and I can only try to figure out mine. I still don’t have a good answer, but it connects all the way back to that spacebar.

  1. It’s funny to look back at that. A child is not trusted to not break things, so we steer them away from Windows and unleash them on the command line instead! 

  2. That was a great book. I spent a lot of time trying to find a photo of the book or its cover, but no luck. 

  3. I also spent some time trying to find what that Office Visual Basic tooling was called. My best guess is Visual Basic for Applications, although this is a guess rather than a memory. 

Mapping my Kitchener-Waterloo bike rides
My rides, with darker shades indicating higher heart rates

I rode a lot when I lived in Kitchener-Waterloo. The weather sucked for at least half the year, but I had time and I love being on the bike. I recorded almost all my rides with a Garmin 500. Without any extra sensors, the Garmin records GPS coordinates, altitude, and speed. I put sensors on my bike that allowed it to record cadence (pedal turns per minute) and more accurate speed data (by relying on wheel rotations instead of movement). I also wore a heart rate monitor most of the time. Each ride ended up as a .fit file that I copied to my computer and uploaded to Strava.

Tom MacWright wrote a post about creating a map of his runs in Washington D.C., and I wanted to try doing the same with my cycling data.

The fit R package by Alex Cooper does the hard work of parsing each .fit file to give a clean dataframe with cadence, distance, speed, altitude, and heart rate metrics for each timestamp.1 Thanks to Alex, all I have to do is merge the data from all the files while excluding the ones that don’t have relevant data.2
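As a sketch of what a single parsed ride looks like (with library(fit) loaded as below, and a made-up file name standing in for a real ride):

# parse one ride; read.fit returns a list, with the per-timestamp
# metrics in its 'record' element (the file name here is a placeholder)
ride <- read.fit(file.path('_map', 'morning-ride.fit'))
str(ride$record)  # position_lat, position_long, altitude, speed, cadence, heart_rate, ...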

First, some library loading and setup.

library(dplyr)
library(fit)
library(ggmap)
library(magrittr)
library(svglite)
library(viridis)

Aside from the fit package that parses the files, I use the ggmap package to overlay the rides on top of the town’s Google Map roads.

There are some limitations to using Google Maps. You have to be careful about hitting a rate limit while developing and rerunning the script; Google watermarks the maps with its logo at the bottom right and left corners; and – and this is the worst – the zoom settings are way too coarse. The maps below are at zoom level 10, which is too far out and leaves a lot of unused map area, but zoom level 11 is too close and cuts off many of the routes. I could technically get around both of those issues by cropping to get rid of the empty space and the logos, but I suspect that violates some usage policy. Finally, higher-resolution maps (i.e., scale = 4) are reserved for business users only.

That said, the ggmap R package is convenient and easy to use. I tried OpenStreetMap and it looks promising, but I still haven’t figured out whether I can get the kind of map I want out of it, or how to do it.

Now we read the ride files from the _map directory and discard any that don’t have the right shape.3

ride_files <- dir('_map/')

check_vars <- function(fit_file) {
    ride <- read.fit(file.path('_map', fit_file))
    # checking number of columns is a 'bad smell' in the future-proofness of this code
    if (ncol(ride$record) == 9) {
        return(ride$record)
    } else {
        return(NA)
    }
}

rides <- do.call('rbind', lapply(ride_files, check_vars))

points <- rides %>%
    na.omit() %>%
    filter(heart_rate < 190)

centers <- c(mean(points$position_long) + .04, mean(points$position_lat) - .08)

With all the files parsed into one dataframe, and the longitude and latitude centers of the map calculated, we can get the base layer Google Map of the town.

map <- get_googlemap(center = centers,
                     zoom = 10,
                     scale = 2,
                     color = 'bw',
                     style = c(feature = "all", element = "labels", visibility = "off"),
                     maptype = c('roadmap'),
                     messaging = FALSE)

Let’s make some maps! Going forward, color/shade variation indicates heart rate.

Google Maps with color heatmap overlay:

ggmap(map, extent = 'device') +
    geom_path(aes(x = position_long, y = position_lat, color = heart_rate), data = points, size = .7) +
    scale_color_viridis(option = 'inferno', guide = FALSE) +
    theme_void()

plot of chunk gmap-color

Google Maps with greyscale heatmap overlay:

ggmap(map, extent = 'device') +
    geom_path(aes(x = position_long, y = position_lat, color = heart_rate), data = points, size = .7) +
    coord_map("mercator") +
    scale_colour_gradient(low = "white", high = "black", guide = FALSE) +
    theme_void()

plot of chunk gmap-grey

Naked routes with color heatmap:

ggplot(points, aes(x = position_long, y = position_lat, color = heart_rate)) +
    geom_path() +
    coord_map("mercator") +
    scale_color_viridis(option = 'inferno', alpha = .2, guide = FALSE) +
    theme_void()

plot of chunk nomap-color

Naked routes with greyscale heatmap:

ggplot(points, aes(x = position_long, y = position_lat, color = heart_rate)) +
    geom_path() +
    coord_map("mercator") +
    scale_colour_gradient(low = "white", high = "black", guide = FALSE) +
    theme_void()

plot of chunk nomap-grey

… + minimal coordinate information

ggplot(points, aes(x = position_long, y = position_lat, color = heart_rate)) +
    geom_path() +
    coord_map("mercator") +
    scale_color_viridis(option = 'inferno', alpha = .2, guide = FALSE) +
    xlab('long') +
    ylab('lat') +
    theme_minimal() +
    theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())

plot of chunk nomap-coords

That was fun!

Next steps: save my favorite as an svg file and get a nice-looking print.
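Here is a minimal sketch of that first step, using the svglite package loaded at the top; the plot choice, file name, and dimensions are placeholders:

# save the color naked-routes map as an SVG
p <- ggplot(points, aes(x = position_long, y = position_lat, color = heart_rate)) +
    geom_path() +
    coord_map("mercator") +
    scale_color_viridis(option = 'inferno', alpha = .2, guide = FALSE) +
    theme_void()

svglite('kw-rides.svg', width = 10, height = 10)
print(p)
dev.off()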

  1. I assume it would also give power data if I had it. 

  2. I record my training on resistance trainers too, but that data is irrelevant because it’s all stationary. 

  3. As I mention in the code comment, validating a data file by counting columns is a bad smell. I think it hurts the generalizability and reusability of the code. That said, this post idled in draft status for way too long, and given that I don’t foresee myself doing training rides in Kitchener-Waterloo again, this code is largely custom-written for data that has already been fully collected. Sometimes you do things the ideal way, and sometimes you ship. 

What I wish I had known

I recently wrote a blog post for the Britt Anderson Group about personal development in academia and what one could do with a psychology degree and coding skills.

I took two courses with Dr. Britt Anderson1 – An Introduction to Methods in Computational Neuroscience, and Human Neuroanatomy and Neuropathology – both of which are among the very best formal education experiences I’ve ever had2. I was also lucky enough to publish a paper with him. Very few people have had as much of an impact and influence on my work, personality, and intellect as Dr. Anderson has. I’m fortunate to have met him during my academic career, and was honored to write a post for his lab’s site. He was my mentor during graduate school without even knowing it.

The post is titled “What I Wish I Had Known: Advice from a Former Psychology Graduate Student” and you can read it in full here.

That said, I wanted to quote the first and second-last paragraphs here, because they fit Take no one’s word for it pretty nicely.

I wanted to write about advice I would give my past self while completing my psychology degrees, but before that, I want to begin with a caveat: each person’s journey is different, and the more you can bring yourself to not feel the pressure to follow any person’s particular advice or prescribed steps, the easier it might be to find your path to what comes next. That was hard for me at first, but you get better at it the more you try.

[…]

This post included a lot of dry thoughts and recommendations, so let me end on a different note: the best thing you can do for yourself, no matter your goals or interests, is to realize that you can learn anything and get really good at whatever you set your mind to. It’s never too late, and the lack of formal training is no deal breaker, and might in fact make things easier. The hardest part is to start, and once you do, the second hardest thing is to keep a schedule of learning and practice.

[…]

  1. MD. A real doctor. 

  2. In the first I implemented the Hodgkin-Huxley model of the neuron in C and Excel (yeah…), and in the second I got to handle and examine brains, skulls, and human cadavers donated to scientific research. That last one was a profound experience. 

Nvim-R for people who love R, vim, and keyboards

For love or money, I write R code almost every day.

Back when most of the R code I wrote was for personal projects or academic research, I worked in RStudio. That’s when I wrote R for love.1

Once I started writing more R code for money, I wanted to find alternatives to RStudio, and unfortunately, there aren’t any. RStudio is without competition in the GUI side of the market.

I don’t remember how I found Nvim-R, but I did and now I can’t imagine working without it. It has become my favorite and default way to write R, no matter the size or the complexity of the script. I use Nvim-R with neovim, but it works with vim too.

Nvim-R screenshot

Install it with your package manager of choice; I use Vundle:

Plugin 'jalvesaq/Nvim-R'

Once you have a .R file in the buffer, you can invoke Nvim-R with the default shortcut <LocalLeader>rf. Nvim-R will open a vim shell window that runs the R console. The configuration I ended up with – see the screenshot above and the vim configuration code below – puts the source editor in the top left, the object browser in the top right, and the console at the bottom. You can set it up so that hitting the space bar in normal mode sends lines from the source editor to be executed in the console. Want to send many lines? The same mapping also works with highlighted lines in visual mode.

Speaking of the object browser: Nvim-R’s object browser is similar to RStudio’s. It shows you the R objects in the environment and updates itself as objects are created and removed. Except I like Nvim-R’s object browser better. It assigns different colors to objects of different types, and it will, by default, expand the dataframes to list their columns underneath them (you can turn that off so that you expand the dataframes you want by hitting Enter on their entry).

I rarely read vim plugin documentation because the features and options I need are often general enough to be described in the READMEs, but if you’re considering using Nvim-R, I highly recommend reading its help/docs. They’re well-written and they’ll help you customize your development environment to your exact specifications: :help Nvim-R.

These are my settings, and they only give a taste of how customizable Nvim-R is:

" press -- to have Nvim-R insert the assignment operator: <-
let R_assign_map = "--"

" set a minimum source editor width
let R_min_editor_width = 80

" make sure the console is at the bottom by making it really wide
let R_rconsole_width = 1000

" show arguments for functions during omnicompletion
let R_show_args = 1

" Don't expand a dataframe to show columns by default
let R_objbr_opendf = 0

" Press the space bar to send lines and selection to R console
vmap <Space> <Plug>RDSendSelection
nmap <Space> <Plug>RDSendLine

  1. Academia doesn’t pay much. See this for more on the subject. 

recently
victrola coffee roasters, seattle

Code

I made some good improvements to my journaling script, caplog. So far, like its half-sibling t, this command-line utility seems to be sticking, which makes me happy because I don’t have to worry about anyone acquiring it and forcing me to find another home for all my entries. It’s simple and it delights me.

Reading

Articles

Books

Academia's troubles

Coast

Someone asked me whether studying cognitive science changed my view of humanity.1

I had to think about it for a while, and my answer — which was only mildly surprising to me — was that instead of changing my views about humanity, it mostly changed my views about science and research. Specifically, that it made me a lot more skeptical whenever anyone claims that “science”, “research”, or studies have shown anything, say, about humanity.

I left academia for two reasons. The first is that academia2 is extremely competitive; the jobs are few, the applicants are many3, and I didn’t want it as much as I saw other people did. The second is that soon after I started graduate school, I became deeply disillusioned with the field, its practices, and its incentives. Eventually I became so cynical about a career that I hadn’t even started yet that I knew I should never try to start it in the first place.


Academia has a serious problem wherein it runs on a system of incentives that rewards bad scientists and pushes good scientists out. At risk of weirding you out by commenting on my own writing, that is a remarkable statement. I just told you that academia rewards bad scientists.

Most of this is not news. Every once in a while the BBC or The Economist will talk about the replication crisis and bad incentives in science, or the file-drawer problem, or p-hacking. But I think the enormity of the problem escapes the majority of people.

Here’s what I think you should do. Find a slow weekend morning or afternoon, make yourself a pot of coffee, and spend an hour or two reading Retraction Watch and Andrew Gelman’s site. I used to subscribe to Retraction Watch’s RSS feed but ended up unsubscribing because it was too prolific and I couldn’t keep up. Things are so busy over there that they publish a weekly weekend reads post that you couldn’t possibly finish reading in a weekend unless you had absolutely no other plans.


Here is the life cycle of an academic. There is very little variation in this cycle:

Undergraduate degree (do a thesis or something)
➝ Grad school, do a Master’s
➝ More grad school, do a PhD
➝ Almost definitely a Post Doctoral fellowship4

You remain in the Post Doc holding pattern until you find a job in a university or college. The dream is to land a tenure track position in a research-intensive institution. The reality is people are increasingly taking lesser and lesser positions because the demand for those dream tenure track jobs far outpaces the supply.

Landing a job, especially a good one (the tenure track and get-to-do-some-research-and-not-just-teach kind), is now determined by one factor and one factor only: publications.5 Publish or perish is not a joke, it is the law of the land. Most important are your past publication count and the likelihood that you will be as productive, if not more so, in the future. Those are also second and third most important. Fourth most important is the prestige of the journals where you get published. Are you publishing in Psych Science or in Frontiers? Makes some difference. I don’t know if anyone actually reads your papers, or whether the quality of your writing, methodology, or, you know, science, factors heavily.

Here is a list of things that, again, with possibly very few exceptions, mean absolutely nothing for your prospects at getting a job:

  1. You champion open science.
  2. You write blog posts about experiments that haven’t worked out, or interesting statistical issues or practices.
  3. You contribute to open source projects, or statistical or data visualization packages.
  4. You are an active mentor and are generous with your time with students or peers.

If there isn’t a publication coming out of it, it doesn’t matter.

It’s not hard to see what a system like that does to the quality of the scientific process. Science is misunderstood by many: it’s not certain, experiments require care, results are not guaranteed, and theories are sooner or later wrong. It takes time to do something right, and you can still end up with a null result that no amount of hard work could have turned into a positive finding. But you want a job, so you will do everything you can to end up with a positive result anyway. You will choose topics that are in fashion, you will try to choose easy experiments that can be published regardless of the result6, and yes, you will operate under the constant pressure to make your t-test or ANOVA give you a p-value less than .05, and you might justify bending the rules of statistics to get it.

The better researchers, the ones who prioritize good theory and well-designed experiments and analyses, are at a disadvantage. They will not try to publish shoddy studies and will not squeeze a result out of data where none exists. They will want to run the experiments that help the field choose between competing theories and make progress, instead of running the experiment that produces the 35th uninformative but curious interaction. And for those ideals, they will pay.

I am not a very social person and I never built a big network of academics when I was a student, and yet I personally know several incredible researchers who ejected from academia as they saw their academic career prospects shrink and ended up in industry, where they are valued and paid far more for their skills than in the field where they would prefer to be. What a tragedy.


In case you decide to not visit Andrew Gelman’s site – your loss, really – I’ve plucked out an example for you.

The absolute minimum background information you need to know is that psychology, especially social psychology, has been going through a replication crisis in which many popular and thought-to-be-bulletproof findings like ego depletion and power pose do not replicate.

The news and internet have not been kind to psychology during this tumultuous time, nor should they have been.

Susan Fiske, a social psychologist and past president of the Association for Psychological Science, wrote an article titled Mob Rule or Wisdom of Crowds? (PDF download), one of the most ill-advised opinion pieces I’ve ever seen from an academic. In it, she – sigh, there is no other way to say this – rants and rails against “online vigilantes” and “self-appointed data police” “volunteering critiques of such personal ferocity and relentless frequency that they resemble a denial-of-service attack that crashed a website by sheer volume of traffic”.

I don’t know what the “website” is supposed to be here, but the idea that academics are suffering a denial-of-service attack from online critics is laughable. I know of few other institutions that live in a protected bubble of their own the way psychology departments do. The whole article is a shame and I am embarrassed on her behalf.

Even though this article was invited by the APS’s Observer, it seems the reaction was so negative that it was never published, and you might find it tricky to find a copy online. The link above gives you the PDF hosted on my own site, plus alternate link 1 and alternate link 2.

The Observer posted an unsurprisingly spineless comment on the issue, including this amazing final paragraph:

Those wishing to share their opinions on this particular matter are invited to submit comments in the space below. Alternatively, letters can be sent to apsobserver@psychologicalscience.org. We ask that your comments include your full name and affiliation.

Yes, please include your affiliation, lest we forget for a moment to judge the value and validity of your comment according to the authority of whether you’re a professor or just a normie.

In one of Andrew Gelman’s comments on the issue, aptly titled “What has happened down here is the winds have changed”, he writes:

In her article that was my excuse to write this long post, Fiske expresses concerns for the careers of her friends, careers that may have been damaged by public airing of their research mistakes. Just remember that, for each of these people, there may well be three other young researchers who were doing careful, serious work but then didn’t get picked for a plum job or promotion because it was too hard to compete with other candidates who did sloppy but flashy work that got published in Psych Science or PPNAS. It goes both ways.

I couldn’t have said it better.


Back to the original question (which I will rephrase to maintain the flow of the story I’m telling here): how has studying cognitive psychology changed my view on things?

For one, I find myself in the uncomfortable position of seeming like a contrarian to two opposing groups: authoritarian conspiracy theorists who assume all scientists are malicious liars with an agenda7, and the intellectual, educated but non-scientific class that has confused the platonic concept of “science” with the practice of scientific inquiry, and therefore defends anything with the smell of science as unquestionable and unassailable. It’s much easier, in fact, to deal with the first group. It’s the second one that depresses me.

Call them what you like – the intellectuals, the elite, the educated class, whatever – I’m not necessarily a fan of any of those terms. They are relatively affluent, internet-savvy people who, in their attempt to fight back against anti-reason trends in the West (trends often driven by socially conservative right-wing groups, although by no means only them), have grossly overcorrected and now defend any output of human research as sacrosanct truth beyond reproach.

I won’t mince words: this is worship of authority. People confuse the day-to-day practice with the scientific method itself, and treat the results of the practice as though it were perfect, the output guaranteed by the sanctity of the lab coat and the scatterplot.

In another response to the Susan Fiske article titled “Weapons of math destruction”, NeuroAnaTody writes:

Science is moving forward so quickly that I don’t even think it’s necessary to point out ways in which the article is wrong. I will instead list some elements of the scientific revolution that trouble me, even though I consider myself a proud (if quite junior) member of the data police.

  1. Belief in published results. I have so little of it left.
  2. Belief in the role of empirical research. Getting to otherwise hidden truths was our thing, the critical point of departure from philosophy.
  3. Belief in the scientific method. I was taught there is such a thing. Now it seems every subfield would have been better off developing its own methods, fitted to its own questions and data.
  4. Belief in statistics. I was taught this is the way to impartial truths. Now I’m a p-value skeptic.
  5. Belief in the academic system. It incentivizes competition in prolifically creating polished narratives out of messy data.

Emphasis mine, because point 1 is the headline for me. Belief in published results, I have so little of it left. That is how studying cognitive psychology has changed my view on things. Whenever I hear of a study that “showed something”, and especially if it’s in the field of psychology, my assumption is that it’s spurious.

So what am I telling you? Science is permanently broken and we are left rudderless in a sea of claims and counter-claims?

No, that would be confusing the practice of science with the scientific process as it should be, the same mistake I think the study-worshipper makes. The scientific method is still the best way we have to approach the truth about the world, we just need to set up the incentives to encourage following it better.

In the meantime, I think you should be extremely skeptical of everything you hear, which is an uncomfortable position but not an optional one at this point. The “study” goes through many stages on its way from having touched the truth and been transformed into data, to your eyes. It has gone through experimental design (done by a human), data collection (done by a different human), analysis (possibly done by a third human), write-up (one or more humans), review (I will stop mentioning that things are done by humans), and interpretation by a journalist or reader.

In all of these stages, a human makes a judgement call to the best of their abilities, and like all other humans, they operate under pressures and incentives. Speaking of incentives, I am also telling you that academia is not the world of enlightened philosopher kings and queens operating outside the realm of dirty wants and desires the rest of us live in. Academics operate within a terrible, broken system of incentives, and you must keep that in mind whenever you’re consuming their research.

The other message I want to leave you with is that academia is broken and I don’t see it being fixed any time soon. It won’t be fixed until academics are evaluated based on more than their number of publications. It won’t be fixed until hiring committees stop looking at how many papers you’ve published and start looking at the quality of your contribution to knowledge. Yes, it’s much harder to decide whether you created knowledge and contributed to theory than it is to look at your impact factor, but that’s what has to happen. That’s it.

  1. Someone else pointed out that it would be difficult for me to answer that question, because I have no access to an alternative life in which I didn’t study cognitive science, or to what my view of humanity would be in that world. But the question is still valid because I can compare to before I started studying cognitive science, or reflect on how my views changed during the study. 

  2. My personal academic experience is in psychology, but the points I make in this post generalize to all disciplines as far as I know. 

  3. Many. The ratio is quite bad. You shouldn’t necessarily trust the numbers from any of those articles, but you can conclude that the picture is bleak for anyone who is in the crème de la crème of their field, and hopeless for everyone else. 

  4. A Post Doc is not a student. They are an employee who is often an independent researcher in a lab and gets paid way too little. 

  5. There might be exceptions to this, but they are exceptional exceptions. 

  6. I have personally received this piece of advice, explicitly, more times than I can count. 

  7. I think scientists do have an agenda that we ought to acknowledge more. They are, after all, people. 
