Recently
Somewhere in northern Washington

February was a month with a lot of travel, which I like, and a lot of Big Life Projects, which I don’t like as much.

Reading

Books

Gironimo!: Riding the Very Terrible 1914 Tour of Italy by Tim Moore.

I read this book while traveling, and I think that was a great context for it. Being into cycling would help you extract 100% of the fun and storytelling in it, but it's worth reading even if you're not. The book's premise is easy to get from the title and a quick search, so I won't go into it here. What you probably don't realize about it until you start reading is how funny this book is. I would frequently start laughing so hard that I'd have to put the book down and have my traveling partner tell me that I'm making a scene.

I think the last thing I read that made me laugh so hard was This World of Ours (PDF) by James Mickens, an associate professor at Harvard who writes about security. Yes, security, and trust me, you owe it to yourself to read it. Here’s the first line from the article, just to give you a taste:

Sometimes, when I check my work email, I'll find a message that says “Talk Announcement: Vertex-based Elliptic Cryptography on N-way Bojangle Spaces.” I'll look at the abstract for the talk, and it will say something like this: “It is well-known that five-way secret sharing has been illegal since the Protestant Reformation [Luther1517]. However, using recent advances in polynomial-time Bojangle projections, we demonstrate how a set of peers who are frenemies can exchange up to five snide remarks that are robust to Bojangle-chosen plaintext attacks.”

and another favorite part:

Web-of-trust cryptosystems also result in the generation of emails with incredibly short bodies (e.g., “R U gonna be at the gym 2nite?!?!?!?”) and multi-kilobyte PGP key attachments, leading to a packet framing overhead of 98.5%. PGP enthusiasts are like your friend with the ethno-literature degree whose multi-paragraph email signature has fourteen Buddhist quotes about wisdom and mankind's relationship to trees. It's like, I GET IT. You care deeply about the things that you care about. Please leave me alone so that I can ponder the inevitability of death.

Anyway, I digress.

Bringing Columbia Home: The Untold Story of a Lost Shuttle and Her Crew by Michael D. Leinbach, Jonathan H. Ward.

This book is a detailed description of the efforts involved in searching for the crew and shuttle remains after Columbia disintegrated on reentry on Saturday, February 1, 2003. Thousands of people spent months walking every reachable square meter of a debris field that covered three states looking for remains of the crew and every piece of Columbia they could find. My interest in the story of Columbia and its demise is more than superficial. Short of the complete CAIB report on the accident, I've read anything and everything I could find about it. This book is a good read if you have an above average interest in what happened. In the beginning it talks a little bit about the day of the accident, and in the end it talks about the investigation into the root cause of the crash, but the majority of it is dedicated to detailing the logistics of bringing thousands of people together into a methodical sweep of an unknown but huge area of the United States. A spaceship broke up at an altitude of 60km, traveling at close to twenty times the speed of sound, and still the search crews found the remains of all seven crew members, and enough parts of the vehicle to figure out what happened.

Listening

The Commander Thinks Aloud from the album Ultimatum by The Long Winters

The Commander Thinks Aloud is one of my favorite songs of all time. It’s written from the perspective of the commander of a space shuttle, specifically the shuttle Columbia, and specifically during reentry on the day of the accident. It’s probably obvious why that was a frequent listen for me this month.

It’s a truly beautiful piece of music. It’s one of those songs that makes you think “how the hell did they think of this? how did they create it? how did they get so much of it so right?”

John Roderick from The Long Winters was interviewed about the song on episode 28 of the Song Exploder podcast. It’s a fascinating 20 minutes.

ATLiens by OutKast

Strangely titled album, but it’s really good old school rap.

I am the captain, this is my log.

This is a post about my most successful attempt at consistent personal journaling to date. And it’s about caplog, the little can-do utility I wrote to make it easier to journal brief thoughts and longer entries.


There exists a certain class of activities. For activities in this class, people often confuse liking doing the activity with liking the idea of doing the activity. Some of the most well-known members of that class include writing, learning/playing music, and exercise. A lot of people feel like they like writing, but they mostly like the idea of writing. They like to imagine writing.

For a long time I thought that my ideas about journaling were confused in that way. I liked the idea of journaling, but after I tried and failed to develop a frequent journaling habit, I started wondering whether I liked journaling itself.


My top two failed attempts at journaling were:

  • Pen and paper: A nice notebook and a nice pen1.
  • Apps: Momento and Day One.

Pen and paper

Pros

  • Handwriting. I am a pen nerd and I love writing on paper. There is a quality to pen and paper writing that can’t be matched by any text editor, not because text editors can’t be great, they’re just not the same kind of thing.
  • Security. No risk of accounts and cloud storage being hacked. Personal and private thoughts are more secure on paper than in any ““cloud””.
  • Durability. This is counter to intuition: a paper journal is vulnerable to water, fire, and other kinds of fatal damage. But software is still a bigger gamble. The expected lifetime of a paper journal is longer than any specific app (although maybe shorter than a more open and portable solution like a bunch of text files).

Cons

  • Search. Optical character recognition wasn’t as good back when I was trying pen and paper journaling as it is today, but even the best end-user OCR software today probably couldn’t read quick or suboptimal handwriting. Even if it could, that means I would have to OCR every page of my journal, an additional task requiring time and effort, and the more effort journaling needs, the less likely you are to do it.
  • Availability. You have to carry the journal with you at all times. If you don’t have the journal, you can’t write your thoughts down. Typing them on your phone to transcribe later is a nice thought but won’t happen. And some thoughts need to be written down quickly, otherwise you lose the moment.

The last con was a fatal one. My pen and paper journaling lasted a long time, but happened very infrequently. My entries were very long and I never went back to read any of them.

Apps

Pros

  • Availability. Apps are on your phone, and you always have that with you.
  • Rich content. Most apps support more than plain text entries. During my Momento and Day One phases, I made a lot of photo entries, which was nice. Some memories are much better described by a photo than any number of words.

Cons

  • Durability and Portability. Apps don’t live very long. Personal journaling is supposed to be a long-term habit, and the mortality rate of mobile apps is too high for the reliability needs of this activity. I could be willing to pay $$ or even $$$, but if the developer doesn’t get enough people who are just as willing, there’s nothing I can do about it. Momento offered an export function that dumped your journal entries and photos into an archive, but then what? I have that archive now, but I don’t think I ever found a way to get it into Day One.
  • Security. If the app requires you to sync your entries to their cloud, well then… sigh.

I think I moved off of Momento when the app was buggy on a new iOS version and the developer(s) were late to update it. I say I think because I’m not sure if I’m remembering that correctly.2 When Day One’s developers released Day One 2 and made their sync service the only option for syncing, I quit the app.3

caplog

I got the idea for caplog from t, the task manager I’ve been using for years. t is a simple Python script that stores tasks in a plain text file. I added a couple of features in my fork, like the -t switch for tagging tasks with @today and parsing of @{date} syntax to tag the task with @today when the date matches the current day. But it works for me because 1) I can use it from the terminal, my natural habitat, and 2) storing tasks in a plain text file means I can add tasks to the file even if I don’t have access to my terminal.
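The @{date} behavior I added to my fork can be sketched in a few lines of Python. This is an illustration of the idea, not t’s actual code, and the @{YYYY-MM-DD} format here is my choice for the example:

```python
import re
from datetime import date

def tag_today(task, today=None):
    """Append @today to a task whose @{YYYY-MM-DD} tag matches the current day.

    The tag format is illustrative -- the real fork may parse dates differently.
    """
    today = today or date.today()
    m = re.search(r"@\{(\d{4})-(\d{2})-(\d{2})\}", task)
    if m and date(*map(int, m.groups())) == today:
        task += " @today"
    return task
```

So a task like `pay rent @{2018-02-28}` picks up an extra `@today` tag on that day and shows up in the today view without any manual editing.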

caplog started as a simple exercise in learning how to store stuff in a SQLite database with Python. First I got it to work doing only the simplest task: put an entry with a timestamp into the table, then slowly added features.
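That simplest version fits in a dozen lines. Here’s a minimal sketch of the idea — the table layout and function name are my guesses for illustration, not caplog’s actual schema:

```python
import sqlite3
from datetime import datetime

# A guessed schema -- caplog's real table layout may differ.
conn = sqlite3.connect(":memory:")  # caplog uses a file, e.g. ~/caplog.db
conn.execute("CREATE TABLE IF NOT EXISTS entries (time TEXT, entry TEXT)")

def log_entry(text, when=None):
    """Insert an entry stamped with the given time (defaults to now)."""
    when = when or datetime.now()
    conn.execute("INSERT INTO entries VALUES (?, ?)",
                 (when.strftime("%Y-%m-%d %H:%M"), text))
    conn.commit()

log_entry("I'm finally writing my personal journaling post.")
```

Everything else — pretty-printed tables, backdating, random recall — is layered on top of that one insert.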

Here’s where it is today:

$ caplog
No log file found. Creating file...
New log file created at /Users/sherif/caplog.db

$ caplog I\'m finally writing my personal journaling post. It only took me 4 months.
$ caplog
┌──────────────────┬──────────────────────────────────────────────────────────┐
│ time             │ entry                                                    │
├──────────────────┼──────────────────────────────────────────────────────────┤
│ 2018-02-28 00:03 │ I'm finally writing my personal journaling post. It only │
│                  │ took me 4 months.                                        │
└──────────────────┴──────────────────────────────────────────────────────────┘
$ caplog -p 3 days ago
Logging an entry dated: February 25 2018 00:11
# in vim
Today I will at least start the draft of the personal journaling post.
# save and exit
$ caplog
┌──────────────────┬──────────────────────────────────────────────────────────┐
│ time             │ entry                                                    │
├──────────────────┼──────────────────────────────────────────────────────────┤
│ 2018-02-25 00:11 │ Today I will at least start the draft of the personal    │
│                  │ journaling post.                                         │
│ 2018-02-28 00:03 │ I'm finally writing my personal journaling post. It only │
│                  │ took me 4 months.                                        │
└──────────────────┴──────────────────────────────────────────────────────────┘

caplog has a --random flag that shows me a random entry from my journal. This is important because there is little point in writing personal notes if I’m never going to read them in the future. I run caplog -r more often than you’d think.
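With entries in a SQLite table, picking a random one is a single query — SQLite can do the shuffling itself. A sketch, again assuming a hypothetical `entries(time, entry)` table rather than caplog’s real schema:

```python
import sqlite3

# Set up a toy journal to draw from.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (time TEXT, entry TEXT)")
conn.executemany("INSERT INTO entries VALUES (?, ?)",
                 [("2018-02-25 00:11", "Started the draft."),
                  ("2018-02-28 00:03", "Finished the draft.")])

# ORDER BY RANDOM() lets the database pick the row;
# no need to load the whole journal into memory first.
row = conn.execute(
    "SELECT time, entry FROM entries ORDER BY RANDOM() LIMIT 1").fetchone()
print(row)
```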

The --past flag in the demo above is how I write most of my entries. Sometimes when I’m thinking about my day I remember something that I want to write about, and I try to enter it with the closest timestamp to when I think it actually happened. Using neovim for text entry was something I only added recently, and it’s amazing!

Another favorite and recent feature: caplog has a --batch flag. You can pass caplog --batch a path and it will look in that folder for text files that have a date and time in the first line, and add the rest of the file as an entry for that date and time. The --batch flag is how I use the Drafts app to add caplog entries from my phone.

I write an entry,

run a “New caplog entry” action

which is defined like this:

And on my home computer I have a scheduled task that runs caplog --batch ~/Dropbox/caplog_inbox and adds all the entries there to my journal. It all seems like a big hack, but it works and it makes me smile and that’s all that matters!
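The first-line-timestamp convention makes the inbox scan easy to sketch. This is a guess at the logic, not caplog’s actual code, and the date format and *.txt glob are my assumptions:

```python
from datetime import datetime
from pathlib import Path

def parse_batch_file(path):
    """First line is the timestamp; the rest of the file is the entry."""
    first, _, rest = Path(path).read_text().partition("\n")
    # Assumed timestamp format; caplog may accept other formats.
    when = datetime.strptime(first.strip(), "%Y-%m-%d %H:%M")
    return when, rest.strip()

def scan_inbox(folder):
    """Yield a (timestamp, entry) pair for each text file in the inbox."""
    for f in sorted(Path(folder).glob("*.txt")):
        yield parse_batch_file(f)
```

Each pair would then go through the same insert path as a normal entry, and the processed file can be deleted so the inbox stays empty.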

Final notes

I am very happy with this setup! This is surprising to me because I had given up on finding a journaling solution that would work well enough over the long term.

That tools like t and caplog work so well for me demonstrates that I am picky and sometimes have preferences that can only be met if I write my own software. I’m fine with that. It’s why I love computers so much.

  1. I do love me a good pen

  2. I should have journaled that. 

  3. To this day I can’t think of a full good faith argument for why they wouldn’t let you use Dropbox instead. 

Recently
New light, Seattle

Code

To get started on some of the themes I have in mind for this year, I started learning and writing some Haskell. It’s not like the languages I’m used to writing, and it’s fun how it bends my mind and turns it inside out.

I’m solving Project Euler problems using Haskell, and I’ve already finished four of them. Four whole problems. There are currently 619 problems in the project.

Reading

Books

  • The Conquest of Happiness by Bertrand Russell.

    Russell’s writing is the simplest, the most straightforward I’ve ever read. Published in 1930, but it might as well have been written two years ago. The problems are the same, and the things we get unhappy about are the same. It might not tell you much you don’t already know, but there’s a difference between what you know that you know, and what you know.

Listening

Trespassers William - Different Stars (2002)

If you pick only one album to try, pick this one.

Different Stars

“Soon It Will be Cold Enough” by Emancipator (2008)

Soon It Will be Cold Enough

“Thao & Mirah” by Thao & Mirah (2011)

Thao & Mirah

“Like the Linen” by “Thao Nguyen” (2005)

Like The Linen

“Art:Work” by ANIMA! (2017)

ANIMA

“ALONE OST” by Rob Allison (2014)

The soundtrack from the game Alone by Laser Dog. Alone is one of the first games by Laser Dog that I played, but I like all of their games. Each game is different from the rest, but they all share a recognizable artistic happy core.

ALONE

'Take no one's word for it' in 2017

Brett Terpstra wrote in a recent post:

I’ve had a few really good coding days in a row here, which has meant not as much blogging this week.

Me, I’ve had a few really good coding months in a row here. Which has meant not as much blogging this year.

In my review of 2016, I was happy about progress and hoping to write more:

I’m moving to a new country and starting in a new research scientist role in 2017, and one way or another I think that will affect my writing here. What I hope will happen is that I’ll be able to write more about science and data as I learn more things faster in my new position.

It didn’t work out this way. I’ve learned and done more over the past 12 months than in any other two to three years combined. But all that work wasn’t open source, and was for a team for which confidentiality matters, which means I couldn’t write about any of it. It also left me with much less time for writing than I used to have.

In 2017 I did a lot more coding, a lot more data analysis and machine learning, a lot more scripting and automating, a lot less bike riding, a lot less listening to music, and moderately less reading. It was a year well-spent, but also a year of strong tradeoffs.

So here is Take no one’s word for it in 2017:

Posts by month and year

Total posts by year

Words by month and year

Total words by year

Future

To quote again from my review of 2016:

The knock on new year’s resolutions is that they encourage you to wait until a seemingly arbitrary moment in time before you make a big change or do something to make your life better. Another knock is that this encourages you to attempt large changes instead of piecemeal changes, which increases the amount of discipline required for success, and therefore increases the chances of failure. Larger changes would happen less frequently, and that makes error-correction harder.

I think there is truth in there, but as with a lot of things people criticize today, the criticism loses a lot of nuance or selectivity and becomes absolute. You shouldn’t wait until new year’s to make your life better, but setting checkpoints for retrospectives and projections at regular intervals is useful. New year’s is arbitrary, but no more arbitrary than any other time or date if you don’t have better reasons for them. Just make sure you’re not using it as an excuse to procrastinate.

I have my personal plans and goals for 2018, but I am wary of making them too specific. I couldn’t have planned for most of 2017 (I wrote a lot less than I thought I was going to, for one), and even though I expect 2018 to be less eventful, I can’t set my tradeoffs for a whole year. I do have a theme in mind, though, and I will try to apply this theme by planning one month at a time.

My early computing

I remember the exact moment I discovered that you could “click” a highlighted button by pressing the spacebar.

The relative of a friend of the family was sitting in front of our computer, installing Visual Studio 6. I had been waiting for this for a long time.


I want to write about Visual Basic, but I don’t see a path to that unless I first write about DOS. But I don’t know how to get to DOS without writing about the first computer I – my family – ever owned.

Many people have some form of old Apple or Macintosh machine as their first computer, but mine was a no-name grey Pentium tower that ran two versions of Windows: XP and 2000.

Today I can sit back and pontificate about how Mac OS X was a revelation when I saw it for the first time in its Leopard skin, and how ever since I got a white Intel MacBook in 2007 I can’t use Windows without getting a metallic aftertaste in my soul. But back in the early 2000s, I was in love with the computer I had in front of me, and that computer ran Windows.

I spent hours in front of that machine. There was no one to teach me, and we didn’t have internet, so I just clicked on everything I could click on and tried to figure out what the hell it did.


I have a big extended family, and all of them belong to one occupation, except one of my uncles, who is in IT. He had a side business putting computers together and selling them, and always had the coolest-looking things lying around: screwdriver sets, SATA cables, empty motherboard cases, empty anti-static bags. I wanted to be around that stuff all the time.

He had a computer that ran Windows 95, and he had games which he would let me play sometimes, but he didn’t trust me to click around his Windows environment. So he taught me how to boot into DOS and launch the games from the command line.1


My uncle had a black Visual Basic 3 book that he didn’t need or use because it wasn’t 1993 anymore, so I took it.2 I read the book and understood that you could drag tools onto a window, give those tools names, then write some code to define what happens when you interacted with them. I read about this, but where could I actually do it? This was a Visual Basic 3 book!

A screenshot of my first programming world. Courtesy The Register

I forget how I discovered that Microsoft Office had some scripting section that not only let you write Visual Basic, but create GUI programs! That was so awesome, and I did a lot of my early stumbling and learning in it.3 Then at some point an opportunity presented itself and we got a relative of a family friend who owed us a favor to install Visual Studio. He came over, opened his CD case, popped the Visual Studio CD into the disc tray, and went through the installation wizard. I could see the dialog windows as he went through them, and I realized he was advancing not by clicking the buttons, but by hitting the spacebar! Whoa….


Visual Basic gets made fun of a lot, I think, and I haven’t spent a lot of time trying to figure out whether there is merit to the mockery. But I do know that it was a very easy language to learn for someone who only had an ancient book to learn from. The only comparable development environment I know is Xcode, and it’s way more complicated to handle a multi-view Xcode project than it is to create a multi-form Visual Basic one.


I remembered learning that the spacebar clicked the highlighted button because I’ve been thinking about why I like computing so much. Why did I spend so much time running DOS commands I read about online? Why do I find intrinsic value in creating tools and automating things? Of course I’m not the only one, but we could all have different reasons and I can only try to figure out mine. I still don’t have a good answer, but it connects all the way back to that spacebar.

  1. It’s funny to look back at that. A child is not trusted to not break things, so we steer them away from Windows and unleash them on the command line instead! 

  2. That was a great book. I spent a lot of time trying to find a photo of the book or its cover, but no luck. 

  3. I also spent some time trying to find what that Office Visual Basic tooling was called. My best guess is Visual Basic for Applications although this is not based on my memory. 

Mapping my Kitchener-Waterloo bike rides
My rides, with darker shades indicating higher heart rates

I rode a lot when I lived in Kitchener-Waterloo. The weather sucked for at least half the year, but I had time and I love being on the bike. I recorded almost all my rides with a Garmin 500. Without any extra sensors, the Garmin records GPS coordinates, altitude, and speed. I put sensors on my bike that allowed it to record cadence (pedal turns per minute) and more accurate speed data (by relying on wheel rotations instead of movement). I also wore a heartrate monitor most of the time. Each ride ends up as a .fit file that I copied to my computer and uploaded to Strava.

Tom MacWright wrote a post about creating a map of his runs in Washington D.C., and I wanted to try doing the same with my cycling data.

The fit R package by Alex Cooper does the hard work of parsing each .fit file to give a clean dataframe with cadence, distance, speed, altitude, and heartrate metrics for each timestamp.1 Thanks to Alex, all I have to do is merge the data from all the files while excluding ones that don’t have relevant data.2

First, some library loading and setup.

library(dplyr)
library(fit)
library(ggmap)
library(magrittr)
library(svglite)
library(viridis)

Aside from the fit package that parses the files, I use the ggmap package to overlay the rides on top of the town’s Google Map roads.

There are some limitations to using Google Maps. You have to be careful about hitting a rate limit while developing and rerunning the script, Google watermarks the maps with their logo at the bottom right and left corners, and – and this is the worst – zoom settings are way too coarse. The maps below are at zoom level 10, which is too far and has a lot of unused map area. But zoom level 11 is too close and cuts off many of the routes. I could technically get around both issues by cropping to get rid of empty space and remove the logos, but I suspect that violates some usage policy. Finally, higher resolution maps (i.e., scale = 4) are reserved for business users only.

That said, the ggmap R package is convenient and easy to use. I tried OpenStreetMap and it looks promising, but I still haven’t figured out if I can get the kind of map I want out of it, and how to do it.

Now we read the ride files from the _map directory and discard any that don’t have the right shape.3

ride_files <- dir('_map/')

check_vars <- function(fit_file) {
    ride <- read.fit(file.path('_map', fit_file))
    # checking number of columns is a 'bad smell' in the future-proofness of this code
    if (ncol(ride$record) == 9) {
        return(ride$record)
    } else {
        return(NA)
    }
}

rides <- do.call('rbind', lapply(ride_files, function(x) { check_vars(x) }))

points <- rides %>%
    na.omit() %>%
    filter(heart_rate < 190)

centers <- c(mean(points$position_long) + .04, mean(points$position_lat) - .08)

With all the files parsed into one dataframe, and the longitude and latitude centers of the map calculated, we can get the base layer Google Map of the town.

map <- get_googlemap(center = centers,
                     zoom = 10,
                     scale = 2,
                     color = 'bw',
                     style = c(feature = "all", element = "labels", visibility = "off"),
                     maptype = c('roadmap'),
                     messaging = FALSE)

Let’s make some maps! Going forward, color/shade variation indicates heartrate.

Google Maps with color heatmap overlay:

ggmap(map, extent = 'device') +
    geom_path(aes(x = position_long, y = position_lat, color = heart_rate), data = points, size = .7) +
    scale_color_viridis(option = 'inferno', guide = FALSE) +
    theme_void()

plot of chunk gmap-color

Google Maps with greyscale heatmap overlay:

ggmap(map, extent = 'device') +
    geom_path(aes(x = position_long, y = position_lat, color = heart_rate), data = points, size = .7) +
    coord_map("mercator") +
    scale_colour_gradient(low = "white", high = "black", guide = FALSE) +
    theme_void()

plot of chunk gmap-grey

Naked routes with color heatmap:

ggplot(points, aes(x = position_long, y = position_lat, color = heart_rate)) +
    geom_path() +
    coord_map("mercator") +
    scale_color_viridis(option = 'inferno', alpha = .2, guide = FALSE) +
    theme_void()

plot of chunk nomap-color

Naked routes with greyscale heatmap:

ggplot(points, aes(x = position_long, y = position_lat, color = heart_rate)) +
    geom_path() +
    coord_map("mercator") +
    scale_colour_gradient(low = "white", high = "black", guide = FALSE) +
    theme_void()

plot of chunk nomap-grey

… + minimal coordinate information

ggplot(points, aes(x = position_long, y = position_lat, color = heart_rate)) +
    geom_path() +
    coord_map("mercator") +
    scale_color_viridis(option = 'inferno', alpha = .2, guide = FALSE) +
    xlab('long') +
    ylab('lat') +
    theme_minimal() +
    theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())

plot of chunk nomap-coords

That was fun!

Next steps: save my favorite as an svg file and get a nice-looking print.

  1. I assume it would also give power data if I had it. 

  2. I record my training on resistance trainers too, but that data is irrelevant because it’s all stationary. 

  3. As I mention in the code comment, validating a data file by counting columns is a bad smell. I think it hurts the generalizability and reusability of the code. That said, this post idled in draft status for way too long, and given that I don’t foresee myself doing more training rides in Kitchener-Waterloo again, this code is largely custom-written for data that is fully collected. Sometimes you do things the ideal way, and sometimes you ship. 

What I wish I had known

I recently wrote a blog post for the Britt Anderson Group about personal development in academia and what one could do with a psychology degree and coding skills.

I took two courses, An Introduction to Methods in Computational Neuroscience and Human Neuroanatomy and Neuropathology with Dr. Britt Anderson1, both of which are among the very best formal education experiences I’ve ever had2. I was also lucky enough to publish a paper with him. Very few people have had as much of an impact and influence on my work, personality, and intellect as Dr. Anderson has. I’m fortunate to have met him during my academic career, and was honored to write a post for his lab’s site. He was my mentor during graduate school without even knowing it.

The post is titled “What I Wish I Had Known: Advice from a Former Psychology Graduate Student” and you can read it in full here.

That said, I wanted to quote the first and second-last paragraphs here, because they fit Take no one’s word for it pretty nicely.

I wanted to write about advice I would give my past self while completing my psychology degrees, but before that, I want to begin with a caveat: each person’s journey is different, and the more you can bring yourself to not feel the pressure to follow any person’s particular advice or prescribed steps, the easier it might be to find your path to what comes next. That was hard for me at first, but you get better at it the more you try.

[…]

This post included a lot of dry thoughts and recommendations, so let me end on a different note: the best thing you can do for yourself, no matter your goals or interests, is to realize that you can learn anything and get really good at whatever you set your mind to. It’s never too late, and the lack of formal training is no deal breaker, and might in fact make things easier. The hardest part is to start, and once you do, the second hardest thing is to keep a schedule of learning and practice.

[…]

  1. MD. A real doctor. 

  2. In the first I implemented the Hodgkin-Huxley model of the neuron in C and Excel (yeah…), and in the second I got to handle and examine brains, skulls, and human cadavers donated to scientific research. That last one was a profound experience. 

Nvim-R for people who love R, vim, and keyboards

For love or money, I write R code almost every day.

Back when most of the R code I wrote was for personal projects or academic research, I worked in RStudio. That’s when I wrote R for love.1

Once I started writing more R code for money, I wanted to find alternatives to RStudio, and unfortunately, there aren’t any. RStudio is without competition in the GUI side of the market.

I don’t remember how I found Nvim-R, but I did and now I can’t imagine working without it. It has become my favorite and default way to write R, no matter the size or the complexity of the script. I use Nvim-R with neovim, but it works with vim too.

Nvim-R screenshot

Install it with your package manager of choice; I use Vundle:

Plugin 'jalvesaq/Nvim-R'

Once you have a .R file in the buffer, you can invoke Nvim-R with the default shortcut <LocalLeader>rf. Nvim-R will open a vim shell window that runs the R console. The configuration I ended up with – see screenshot above and vim configuration code below – puts the source editor in the top left, the object browser in the top right, and the console in the bottom. You can set it up so that hitting space bar in normal mode sends lines from the source editor to be executed in the console. Want to send many lines? This function also works with highlighted lines in visual mode.

Speaking of the object browser: Nvim-R’s object browser is similar to RStudio’s. It shows you the R objects in the environment and updates itself as objects are created and removed. Except I like Nvim-R’s object browser better. It assigns different colors to objects of different types, and it will, by default, expand the dataframes to list their columns underneath them (you can turn that off so that you expand the dataframes you want by hitting Enter on their entry).

I rarely read vim plugin documentation because the features and options I need are often general enough to be described in the READMEs, but if you’re considering using Nvim-R, I highly recommend reading its help/docs. They’re well-written and they’ll help you customize your development environment to your exact specifications: :help Nvim-R.

These are my settings, and they only give a taste of how customizable Nvim-R is:

" press -- to have Nvim-R insert the assignment operator: <-
let R_assign_map = "--"

" set a minimum source editor width
let R_min_editor_width = 80

" make sure the console is at the bottom by making it really wide
let R_rconsole_width = 1000

" show arguments for functions during omnicompletion
let R_show_args = 1

" Don't expand a dataframe to show columns by default
let R_objbr_opendf = 0

" Press the space bar to send lines and selection to R console
vmap <Space> <Plug>RDSendSelection
nmap <Space> <Plug>RDSendLine
  1. Academia doesn’t pay much. See this for more on the subject. 
