the other blog post i was going to write

It’s always unnerving to realize that your happiness is highly correlated with some particular event, object, person, substance, or thought pattern. Various components of pop culture have led us to believe that happiness is {a warm gun | two kinds of ice cream | high serotonin levels | coca-cola} and so forth, but nobody ever says that happiness is a volatile multidimensional product of quickly-fluctuating vectors along a dozen axes.

For a while I thought that happiness was jumping on a plane with the wrong power adapter and a boarding pass that you aren’t sure how you managed to pay for. I did this last week, and it felt good (long walks, surrealistic jetlag, mint tea, sharing beds with old friends, riesling, cold arms, trampling through abandoned grass) until I got a 103 degree (F) fever on the plane ride home and was incapacitated for two days.

One loveable thing about a non-permanent, non-excruciating-but-still-debilitating sickness is that it pares down expectations until they’re almost manageable. Happiness becomes a two-step process: (1) regain appetite until soup sounds like it would be okay, and (2) obtain soup.

As I got better, I started to realize that happiness, for me, is usually predicated on feeling engaged with the world and thereby interested in outcomes. This may not actually be true, but sometimes plausibility and convenience in the right proportions is better than truth.

Academic systems, at least in the limit of platonic idealism, are all about being engaged with the world, insofar as exploring it makes you feel like you’re more a part of it. Travel is similar. So are urban exploration and reading books and the process of growing up.

This year, for the first time, it all seems incredibly hard. Increasingly often in the past few weeks, I find myself having to make an effort to feel like I’m adding something to the world and vice versa. A stronger and scarier form of this feeling is stagnation.

So, yesterday, I decided to start consciously keeping track of things that make me feel engaged with the world, even if they only work for a moment or two. Hitting the submit button on a blog post is one of them; another is thoughtfully-placed punctuation that ever-so-slightly tilts the tone of a sentence. Yet another: grasping another small morsel of German conjugation.

Re: the sibling topic of optimizing for long-term fulfillment: it seems increasingly debatable to me whether anything there, outside of well-established health practices (ex: not smoking cigarettes), is more than folklore and guesswork.

don’t forget to secure cookies ppl

Update (5/28/14): Regrettably, most of the stories covering this blog post have been all “OMG EVERYTHING IS BROKEN” rather than “Here’s how to make things better until WordPress rolls out a fix” (a fix which I humbly believe will take a while to arrive *fully*, given that their SSL support is so patchy). So, since most people reading this are probably coming from one of those articles, I think it’s important to start with the actionable steps people can take to mitigate cookie-hijacking attacks on WordPress:

  1. If you’re a developer running your own WordPress install, make sure you set up SSL on all relevant servers and configure WordPress to flag auth cookies as “secure” (a quick way to audit your cookie flags is sketched after this list).
  2. If you’re a WordPress user, don’t be logged into WordPress on an untrusted network, or use a VPN. If you are and you visit a wordpress.com site (which confusingly may not actually have a wordpress.com domain name), your auth cookies are exposed.
  3. [Experimental, probably not recommended] You can manually set the “secure” flag on the WP auth cookies in your browser. There’s no guarantee that this works consistently, since the server can always send a Set-Cookie that reverts it into an insecure cookie. It may cause some WP functionality to break.
  4. If you suspect that your WP cookie may have been stolen in the past, you can invalidate it by (1) waiting 3 years for it to expire on the server or (2) resetting your wordpress.com password. Note that logging out of WordPress does *not* invalidate the cookie on the server, so someone who stole it can use it even after you’ve logged out. I verified that resetting your WP password does invalidate the old cookie; there may be other ways, but I haven’t found any.
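
Re: #1 and #3: here’s a minimal sketch in standard-library Python (not an official WordPress tool) for auditing which cookies a server sets without the Secure flag:

```python
# Minimal sketch: list the cookies a server sets and whether each one
# carries the Secure flag. Standard library only; hostname is an example.
import http.client

conn = http.client.HTTPSConnection("wordpress.com")
conn.request("GET", "/")
resp = conn.getresponse()

for header, value in resp.getheaders():
    if header.lower() == "set-cookie":
        name = value.split("=", 1)[0]
        attrs = [a.strip().lower() for a in value.split(";")[1:]]
        print(name, "-> Secure" if "secure" in attrs else "-> NOT SECURE")
```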

Original post below.


While hunting down a bug report for Privacy Badger, I noticed the “wordpress_logged_in” cookie being sent over clear HTTP to a WordPress authentication endpoint (http://r-login.wordpress.com/remote-login.php) on someone’s blog.

[Screenshot: the wordpress_logged_in cookie being sent over plain HTTP. uh-oh]

Sounds like bad news! As mom always said, you should set the “secure” flag on sensitive cookies so that they’re never sent in plaintext.

To check whether this cookie did anything interesting, I logged out of my wordpress account, copied the wordpress_logged_in cookie into a fresh browser profile, and visited http://wordpress.com in the new browser profile. Yep, I was logged in!

This wouldn’t be so bad if the wordpress_logged_in cookie were invalidated when the original user logged out or logged back in, but it definitely still worked. Does it expire? In 3 years. (Not sure when it gets invalidated on the server side, haven’t waited long enough to know.)
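
If you want to repeat that experiment without juggling browser profiles, here’s a rough sketch using Python’s requests library. The cookie value is a placeholder, and the “logout link” heuristic is my assumption, not anything WordPress documents:

```python
# Rough sketch: replay a captured wordpress_logged_in cookie and guess
# whether the server still honors it as a logged-in session.
import requests

captured = "wordpress_logged_in=PASTE_CAPTURED_VALUE_HERE"  # placeholder
name, _, value = captured.partition("=")

resp = requests.get("https://wordpress.com/", cookies={name: value})
# Assumption: a logged-in page mentions a logout link somewhere in its HTML.
print("cookie still valid?", "logout" in resp.text.lower())
```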

Is this as bad as sending username/password in plaintext? I tried to see if I could reset the original user’s password.

[Screenshot: attempting to reset the user’s password]

That didn’t work, so I’m assuming WordPress uses the actually-secure cookie (wordpress_sec) for super important operations like password change. Nice job, but . . .

It turns out I could post to the original user’s blog (and create new blog sites on their behalf):

[Screenshot: publishing a post as the user]

I could see private posts:

[Screenshot: viewing the user’s private posts]

I could post comments on other blogs as them:

[Screenshot: posting a comment as the user]

I could see their blog stats:

[Screenshot: the user’s blog stats]

And so forth. I couldn’t do some blog administrator tasks that required logging in again with the username/password, but still, not bad for a single cookie.

Moral of the story: don’t visit a WordPress site while logged into your account on an untrusted local network.

Update: Thanks to Andrew Nacin of WordPress for informing me that auth cookies will be invalidated after a session ends in the next WordPress release and that SSL support on WordPress will be improving!

Update (5/26/14): I subsequently found that the insecure cookie could be used to set someone’s 2fac auth device if they hadn’t set one, thereby locking them out of their account. If someone has set up 2fac already, the attacker can still bypass login auth by cookie stealing – the 2fac auth cookie is also sent over plaintext.

Update (5/26/14): A couple people have asked about whether the disclosure timeline below is reasonable, and my response is here.

Disclosure timeline:

Wed, 21 May 2014 16:12:17 PST: Reported issue to security@automattic.com, per the instructions at http://make.wordpress.org/core/handbook/reporting-security-vulnerabilities/#where-do-i-report-security-issues; at this point, the report was mostly out of courtesy, since I figured it had to be obvious to them and many WP users already that the login cookie wasn’t secured (it’s just a simple config setting in WordPress to turn on the secure cookie flag, as I understand it). Got no acknowledgment that the email was received.

22 May 2014 16:43: Mentioned the lack of cookie securing publicly. https://twitter.com/bcrypt/status/469624500850802688

22 May 2014 17:39: Received response from Andrew Nacin (not regarding lack of cookie securing but rather that the auth cookie lifetime will soon be that of a regular session cookie). https://twitter.com/nacin/status/469638591614693376

23 May 2014 ~13:00: Discovered the two-factor auth issue by accident; reported it to both security@automattic.com and security@wordpress.org in reply to the original email. I also mentioned it to Dan Goodin, since I found the bug while trying to answer a question he had about cookies, but I did not disclose it publicly.

25 May 2014 15:20: Received email response from security@automattic.com saying that they were looking into it internally (no mention of timeline). Wrote back to say thanks.

26 May 2014, ~10:00: An Ars Technica article about this was published, mentioning the 2-fac auth issue. I updated this blog post to reflect that.

26-27 May 2014: Some commenters on the Ars Technica article discover an arguably worse bug than the one the original article was about: WordPress sends the login form over HTTP. (Even though the form POST is over HTTPS, a local network attacker can modify the form target on the HTTP page however they want, and then it’s game over.) This wouldn’t be so bad if everyone used a password manager and changed passwords semi-regularly, since most people are likely to log in to WordPress through their blog’s admin portal (which is always HTTPS as far as I can tell), except that password reuse is rampant. Robert Graham subsequently published this blog post.

29 May 2014, 5:52: Received reply from WordPress saying they would email me again when fixed.

30 May 2014, 14:51: Andrew Nacin says all issues are supposedly fixed.

How to make a less-leaky Heartbleed bandage

Mashable just put out a nice-looking chart showing “Passwords You Need to Change Right Now” in light of the recent Heartbleed carnage. However, it has some serious caveats that I wanted to mention:

  1. It’s probably better to be suspicious of companies whose statements are in present tense (ex: “We have multiple protections” or even “We were not using OpenSSL”). The vulnerability had existed since 2011, so even if a service was protected at the time of its disclosure 3 days ago, it could have been affected at some point long before then. I am also skeptical that every single company on the list successfully made sure that nothing that they’ve used or given sensitive user data to had a vulnerable version of OpenSSL in the last 2 years.
  2. The article neglects to mention that password reuse means you might have to change passwords on several services for every one that was leaked. The same goes for the fact that one can trigger password resets on multiple services by authenticating a single email account.
  3. You should also clear all stored cookies just in case the server hasn’t invalidated them as it should; many sites use persistent CSRF tokens, so logging out doesn’t automatically invalidate them. (Heartbleed trivially exposed user cookies.)
  4. Don’t forget to also change API keys if a service hasn’t force-rotated those already.
  5. It remains highly unclear whether any SSL certificates were compromised because of Heartbleed. If so, changing your password isn’t going to help against a MITM who has the SSL private key unless the website has revoked its SSL certificate and you’ve somehow gotten the revocation statement (LOL). This is complicated. Probably best not to worry about it right now because there’s not much you can do (though a quick issuance-date check is sketched after this list), but we all might have to worry about it a whole lot more depending on which way the pendulum swings in the next few days.
  6. Related-to-#5-but-much-easier: clear TLS session resumption data. I think this usually happens automatically when you restart the browser.
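
Regarding #5: one rough signal is the certificate’s issuance date. A sketch using Python’s ssl module and the cryptography package (the hostname is an example, and note that a certificate reissued after the disclosure may still reuse the old private key):

```python
# Sketch: was this site's certificate issued after Heartbleed went public
# (2014-04-07)? Reissuance alone doesn't prove the key was rotated.
import ssl
from datetime import datetime

from cryptography import x509
from cryptography.hazmat.backends import default_backend

DISCLOSURE = datetime(2014, 4, 7)

pem = ssl.get_server_certificate(("wordpress.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
print("notBefore:", cert.not_valid_before)
print("issued after disclosure?", cert.not_valid_before > DISCLOSURE)
```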

Nonetheless, Mashable made a pretty good chart for keeping track of what information companies have made public regarding the Heartbleed fallout.

Zero-bit vulnerabilities?

The other day, I overheard Seth Schoen ask the question, “What is the smallest change you can make to a piece of software to create a serious vulnerability?” We agreed that one bit is generally sufficient; for instance, in x86 assembly, the operations JL and JLE (corresponding to “jump if less than” and “jump if less than or equal to”) differ by one bit, and the difference between the two could very easily cause serious problems via memory corruption or otherwise. As a simple human-understandable example, imagine replacing “<” with “<=” in a bus ticket machine that says: “if ticket_issue_date < today, reject rider; else allow rider.”
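
To make that concrete, here’s the ticket machine in a few lines of Python, along with the single-bit opcode difference:

```python
# The bus-ticket check from above: the original rejects tickets issued
# before today, while the one-bit-flipped variant (<=) also rejects
# tickets issued today -- same code, one flipped comparison, new behavior.
from datetime import date

def reject(issue_date: date) -> bool:
    return issue_date < date.today()    # original check

def reject_flipped(issue_date: date) -> bool:
    return issue_date <= date.today()   # after the one-bit flip

print(reject(date.today()), reject_flipped(date.today()))  # False True

# The same flip at the machine level: x86 JL (0x7C) and JLE (0x7E)
# opcodes differ in exactly one bit.
assert bin(0x7C ^ 0x7E).count("1") == 1
```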

At this point, I started feeling one-bit-upsmanship and wondered whether there was such a thing as a zero-bit vulnerability. Obviously, a binary that is “safe” on one machine can be malicious on a different machine (ex: if the second machine has been infected with malware), so let’s require that the software must be non-vulnerable and vulnerable on two machines that start in identical states. For simplicity, let’s also require that both machines are perfectly (read: unrealistically) airgapped, in the sense that there’s no way for them to change state based on input from other computers.

This seems pretty much impossible to me unless we consider vulnerabilities probabilistically generated by environmental noise during code execution. Two examples for illustration:

  1. A program that behaves in an unsafe way if the character “A” is output by a random character generator that uses true hardware randomness (ex: quantum tunneling rates in a semiconductor).
  2. A program that behaves in an unsafe way when there are single-bit flips due to radioactive decay, cosmic ray collisions, background radiation, or other particle interactions in the machine’s hardware. It turns out that these are well-known and have, in some historical cases, caused actual problems. In 2000, Sun reportedly received complaints from 60 clients about an error caused by background radiation that flipped, on average, one bit per processor per year! (In other words, Sun suffers due to sun.)

Which brings up a fun hypothetical question: if you design an SSL library that will always report invalid certificates as valid if ANY one bit in the library is flipped (but behaves correctly in the absence of single-bit flip errors), have you made a zero-bit backdoor?

a short story idea

In the year 2014, a startup in San Francisco builds an iPhone app that successfully cures people of heartbreak, but it requires access to every permission allowed on the operating system, including some that no app has ever requested before. It only costs $2.99 though.

The app becomes hugely popular. The heartbroken protagonist of our story logs into the Apple iStore to download it, but because the Apple iStore doesn’t support HTTP Strict Transport Security yet, an NSA FOXACID server intercepts the HTTP request and injects targeted iPhone malware into the download before Apple’s servers have a chance to respond.

However, the malware was actually targeted for the iPhone of an overseas political dissident. The only reason it reached our protagonist by mistake was because the first SHA-1 collision in recorded history was generated by the tracking cookies that NSA used to target the dissident.

Meanwhile, the protagonist is wondering whether this app is going to work once it finishes installing. He smokes a cigarette and walks along a bridge in the pouring rain. Thousands of miles away, an NSA agent pinpoints his location and dispatches a killer drone from the nearest drone refueling station.

The protagonist is silently assassinated in the dark while the entire scene is caught on camera by a roaming Google Street View car. The NSA realizes this and logs into Google’s servers to delete the images, but not before some people have seen them thanks to CDN server caching.

Nobody really wants to post these pictures, because they’re afraid of getting DMCA takedown notices from Google Maps.

decentralized trustworthiness measures and certificate pinning

On the plane ride from Baltimore to SFO, I started thinking about a naming dilemma described by Zooko. Namely (pun intended): it’s difficult to architect name assignment systems that are simultaneously secure, decentralized, and human meaningful. Wikipedia defines these properties as:

  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable.

It’s pretty easy to make systems that satisfy two of the three. Tor Hidden Service (.onion) addresses are secure and decentralized but not human-meaningful since they look like random crap. Regular domain names like stripe.com are secure and human-meaningful but not decentralized since they rely on centralized DNS records. Human names are human-meaningful and decentralized but not secure, because multiple people can share the same name (that’s why you can’t just tell the post office to send $1000 to John Smith and expect it to get to the right person).

It’s fun to think of how to take a toy system that covers two edges of Zooko’s triangle and bootstrap it along the third until you get an almost-satisfactory solution to the naming dilemma. Here’s the one I thought of on the plane:

Imagine we live in a world with a special type of top-level domain called .ssl, which people have decided to make because they’re sick of the NSA spying on them all the time. .ssl domains have some special requirements:

  1. All .ssl servers communicate only over SSL connections. Browsers refuse to send any data unencrypted to a .ssl domain.
  2. All .ssl domain names are just the hash of the server’s SSL public key.
  3. The registrars refuse to register a domain name for you unless you show them a public key that hashes to that domain name.

This naming system wouldn’t be human-meaningful, because people can’t easily remember URLs like https://2xtsq3ekkxjpfm4l.ssl. On the other hand, it’s secure because the domain names are guaranteed to be unique (except in the overwhelmingly-unlikely cases where two keys have the same hash or two servers happen to generate the same keypair). It’s not truly decentralized, because we still use DNS to map domain names to IP addresses, but I argue that DNS isn’t a point of compromise: if a MITM en route to the DNS server sends you to the wrong IP address, your browser refuses to talk to the server at that IP address because it won’t show the right SSL certificate. This is an unavoidable denial-of-service vulnerability, but the benefit is that you detect the MITM attack immediately.
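
For a sense of where a name like 2xtsq3ekkxjpfm4l.ssl might come from, here’s a sketch that derives one by base32-encoding a truncated hash of the public key, roughly the way .onion addresses work. The hash function and the 80-bit truncation are my assumptions, not a spec:

```python
# Sketch: derive a .ssl domain name from the server's public key bytes.
# SHA-256 and the 80-bit truncation are illustrative assumptions; 80 bits
# base32-encode to a 16-character name like the example above.
import base64
import hashlib

def ssl_domain(public_key_der: bytes) -> str:
    digest = hashlib.sha256(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode().lower() + ".ssl"

print(ssl_domain(b"example public key bytes"))
```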

Of course, this assumes we already have a decentralized way to advertise these not-very-memorable domain names. Perhaps they spread by trusted emails, or word-of-mouth, or business cards at hacker cons. But still, the fact that they’re so long and complicated and non-human-meaningful opens up serious phishing vulnerabilities for .ssl domains!

So, we’d like to have petnames for .ssl domains to make them more memorable. Say that the owner of “2xtsq3ekkxjpfm4l.ssl” would like to have the petname “forbes.ssl”; how do we get everyone to agree on and use the petname-to-domain-name mappings? We could store the mappings in a distributed, replicated database and require that every client check several database servers and get consistent answers before resolving a petname to a domain name. But that’s kinda slow, and maybe we’re too cheap to set up enough servers to make this system robust against government MITM attacks.

Here’s a simpler and cheaper solution that doesn’t require any extra servers at all: require that the distance between the hash of the petname and the hash of [server’s public SSL key] + [nonce] is less than some number D.[1] The server operator is responsible for finding a nonce that satisfies this inequality; otherwise, clients will refuse to accept the server’s SSL certificate.

[1] For purposes of this discussion, it doesn’t really matter how we choose to measure the distance between two hashes, but it should satisfy the following: (1) two hashes that are identical have a distance of 0, and (2) the number of distinct hashes that are at distance N from a hash H0 should grow faster than linearly in N. We can pick Hamming distance, for example.
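
For concreteness, a minimal illustration of Hamming distance over SHA-256 digests, which satisfies both properties:

```python
# Hamming distance between digests: identical digests are at distance 0,
# and the number of 256-bit digests at distance N from a fixed digest is
# C(256, N), which grows much faster than linearly in N.
import hashlib

def hash_distance(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

h0 = hashlib.sha256(b"forbes").digest()
h1 = hashlib.sha256(b"not forbes").digest()
print(hash_distance(h0, h0))  # 0
print(hash_distance(h0, h1))  # ~128 on average for unrelated inputs
```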

In other words, the procedure for getting a .ssl domain now looks like this:

  1. Alice wants forbes.ssl. She generates an SSL keypair and mines for a nonce that makes the hash of the public key plus nonce close enough to the hash of “forbes” (see the sketch after this list).
  2. Once Alice does enough work to find a satisfactory nonce, she adds it as an extra field in her SSL certificate. The registrar checks her work and gives her forbes.ssl if the name isn’t already taken and her nonce is valid.
  3. Alice sets up her site. She continues to mine for better nonces, in case she has adversaries who are secretly also mining for nonces in order to do MITM attacks on forbes.ssl in the future (more on this later).
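
A rough sketch of the mining in step 1, assuming SHA-256 and the Hamming distance from the footnote. The key bytes and the distance threshold are placeholders; a real threshold would be far tighter (and far more expensive to mine for):

```python
# Sketch of step 1: mine for a nonce that brings hash(pubkey + nonce)
# within Hamming distance D of hash("forbes"). All parameters are
# placeholders; D=105 is loose so the demo finishes in seconds.
import hashlib
import itertools

def distance(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def mine(petname: str, public_key: bytes, max_distance: int) -> bytes:
    target = hashlib.sha256(petname.encode()).digest()
    for n in itertools.count():
        nonce = n.to_bytes(8, "big")
        candidate = hashlib.sha256(public_key + nonce).digest()
        if distance(target, candidate) < max_distance:
            return nonce

print(mine("forbes", b"alice's public key bytes", 105).hex())
```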

Bob comes along and wants to visit Alice’s site.

  1. Bob goes to https://forbes.ssl in his browser.
  2. His browser sees Alice’s SSL certificate, which has a nonce. Before finishing the SSL handshake, it checks that the distance D1_forbes between the hash of “forbes” and the hash of [SSL public key]+[nonce] is less than Bob’s maximum allowed distance, D1 (a sketch of this check follows the list). Otherwise it abandons the handshake and shows Bob a scary warning screen.
  3. If the handshake succeeds, Bob’s browser caches Alice’s SSL certificate and trusts it for some period of time T; if Bob sees a different certificate for Alice within time T, his browser will refuse to accept it, unless Alice has issued a revocation for her cert during that time.
  4. After time T, Bob goes to Alice’s site again. His maximum allowed distance has gone down from D1 to D2 during that time. Luckily, Alice has been mining for better nonces, so D1_forbes is down to D2_forbes. Bob’s browser repeats Step 2 with the new distances and decides whether or not to trust Alice for the next time interval T.
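
And the matching check on Bob’s side for steps 2 and 4; every number here is made up for illustration:

```python
# Sketch of steps 2 and 4: accept Alice's certificate only if her nonce
# brings the key hash within Bob's current allowed distance, which
# tightens on each visit interval T. All thresholds are made up.
import hashlib

def distance(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def accept(petname: str, public_key: bytes, nonce: bytes, allowed: int) -> bool:
    target = hashlib.sha256(petname.encode()).digest()
    candidate = hashlib.sha256(public_key + nonce).digest()
    return distance(target, candidate) < allowed

thresholds = [105, 100, 95]  # D1, D2, D3: tighter on each successive visit
# accept("forbes", alice_pubkey, alice_nonce, thresholds[visit_number])
```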

In reality, you probably wouldn’t want to use this system with SSL certs themselves; rather, it’d be better to use the nonces to strengthen trust-on-first-use in a key pinning system like TACK. That is, Alice would mine for a nonce that reduces the distance between the hash of “forbes” and the hash of [TACK Signing Key]+[nonce].

For those unfamiliar with TACK, it’s a system that allows SSL certificates to be pinned to a long-term TACK Signing Key provided by the site operator, which is trusted-on-first-sight and cached for a period of up to 30 days. Trust-on-first-use gets rid of the need to pin to a certificate authority, but it doesn’t prevent a powerful adversary from MITM’ing you every time you visit a site if they can MITM you the first time with a fake TACK Signing Key.

The main usefulness of nonces for TACK Signing Keys is this: it makes broad MITM attacks much more costly. Not only does the MITM have to show you a fake key, but they have to show you one with a valid nonce. If they wanted to do this for every site you visit, keeping in mind that your acceptable distances go down over time, they’d have to continuously mine for hundreds or thousands of domains.

Not impossible, of course, but it’s incrementally harder than just showing you a fake certificate.

Another nice thing about this scheme is that Bob can decide to set different distance thresholds for different types of sites, depending on how “secure” they should be. He can pick a very low distance D_bank for his banking website, because he knows that his bank has a lot of computational resources to mine for a very good nonce. On the other hand, he picks a relatively high distance D_friend for his friend’s homepage, because he knows that his friend’s one-page site doesn’t take any sensitive information.

My intuition says that sites with high security needs (banks, e-commerce, etc.) also tend to have more computational resources for mining, but obviously this isn’t true for sites like Wikileaks or some nonprofits that handle sensitive information, like Planned Parenthood. That’s okay, because volunteers and site users can also mine for nonces! Ex: if Bob finds a better nonce for Alice, he can send it to her so that she has a stronger certificate.

Essentially, this causes proof of trustworthiness to become decentralized: if I start a whistleblower site, I can run a crowd-mining campaign to ask thousands of volunteers around the world to help me get a strong certificate. I win as long as their combined computing power is greater than that of my adversaries.

Of course, that last part isn’t guaranteed. But it’s interesting to think about what would happen either way.


Aaron

My co-worker Peter and I were riding the Caltrain from Mozilla to San Francisco a few days ago. A stranger sat down next to us and started talking. When I mentioned that we worked at EFF, his eyes lit up and he said, “Oh! But you guys have won, right?”

Confused, I asked what he meant by that.

He said, “You defeated SOPA and PIPA a couple years ago. So you’ve won.”

We laughed and explained that it didn’t quite work like that. Peter said, “Imagine this: you’re a hero in a comic book. Every time you defeat your nemesis, a new one appears. This happens over and over again. It has to work that way, because you live inside a comic book.”

And so it does. SOPA and PIPA are dead, but now there’s NSA surveillance.

—–

Aaron Swartz died a year ago today. I didn’t know him well at all, but I could tell he believed that he had the power to make the world that he wanted to live in. That’s not something that everyone believes about themselves; in fact, I think very few people live their lives as if it were true.

When Aaron died, I felt like I had to do something. I didn’t understand how to effectively fight for Internet freedom or why governments cared so much about restricting it, but I could see that Aaron’s work had pivotal consequences for the future of human societies. I realized that if the wrong people gained control over the laws of the Internet, ordinary users would quickly lose their right to free speech on the greatest medium of expression that history has ever witnessed.

I didn’t know anything about code or laws or activism a year ago, but Aaron’s death taught me that the fight for Internet freedom is lonely enough that it didn’t matter who I was. One more person, one step forward.

—–

I think SOPA/PIPA was the moment when we, the citizens of the Internet, realized that we could stand up and actually protect ourselves against historically-powerful institutions. As Peter once said, “This was the moment when the Internet had grown up.”

There’s a famous shot of Aaron at a SOPA/PIPA protest, standing in front of a crowd of people and yelling at them, “It’s easy sometimes to feel like you’re powerless, when you come out and march in the streets and nobody hears you. But I’m here to tell you today, you are powerful.”

When the ratio of Congress members supporting SOPA/PIPA to those against it went from 80/31 to 65/101 overnight on January 18, 2012, we started to think that maybe Aaron had a point: if enough people show that they care about something, the government listens and the people win.

Perhaps this strategy doesn’t apply to the fight against mass surveillance, because it’s a bigger and different sort of enemy than copyright. That’s okay. Comic books aren’t interesting without plot twists, I suppose.

(Thanks to Jacobo Nájera for translating this post into Spanish: http://metahumano.org/log/aaron-yan-zhu/.)

On Suicide

I lost four friends and relatives of friends to suicide this past year. I’d prefer it if 2014 were different, and I’ve been trying to think about how to make that happen.

The least I can do is offer myself to anyone who would otherwise feel alone: if you’re at that point where you’re thinking about hurting yourself, please please please call or write to me. I’d really like that, even if you don’t feel like it would help in any way, even if we’ve never met.

The more difficult thing for me to do, and the one that I’ve been putting off for months, is to write a bit about what it feels like to reach that point. I won’t claim that my experiences are universal in any way, but maybe some parts will resonate with others who’ve gone to similar places.

I would really not like to alarm anyone, so please just take everything here literally. Suicide is, unfortunately, stigmatized in such a way that it’s extremely difficult to write about non-anonymously for fear of scaring friends. That seems like the start of a vicious cycle.

I’ve never felt very attached to life, even when things are going great (as they are now). I have a theory that human beings naturally vary in how much they value their own lives, just like they vary in how much they value having things like fancy cars. People who are a couple standard deviations on the low-value-on-life side don’t necessarily have worse lives than other people; it’s just that they’re not as attached to their lives. I think I’m definitely pretty far on the low end.

But on the other hand, there are a lot of people that I love in the world, and I have some sense that there are people in the world who feel the same way about me. So I can understand that my death would make those people feel absolutely terrible, and I don’t want that to happen.

Sometimes I get sad and feel like the future isn’t going to be better than the past. I think the word that gets used a lot for this kind of prolonged sadness is “depression.” When this happens, there’s an absurd number of social barriers to talking about it openly. I feel like the number of friends I have, effectively, is suddenly reduced from dozens to one or two if I’m lucky.

So imagine that things are getting kind of hopeless and your effective friend number is down to two. You’re thinking about talking to these two people about your not-doing-great, but you have to stop and think about:

  1. Would this cause them unnecessary stress? Are they doing okay in their own lives?

  2. If you bring up allusions to suicide, would they do something dramatic against your will, such as call a hospital?

  3. If you do end up hurting yourself in some way, would they feel guilty about it forevermore because they couldn’t save you when they had the chance?

  4. What if they tell you that your life is great and people love you? How do you explain to them that even though those are facts, they have no relevance to how things are going inside your head?

  5. What if they think that you’re telling them this just because you want their attention or pity? Maybe that’s what you’re doing, subconsciously.

All these are fantastic reasons for you to keep silent. Also, there’s the fear that someone will never see you in the same way again once you admit to them that you’ve been looking at tables comparing various common methods of suffocation. It is generally not advantageous to come off as vulnerable or unstable.

That all just sucks. It’s shocking to me that anyone can learn to ask for help at all.

Earlier this year, I didn’t really feel like talking about suicide ever. Still, I observed thought patterns that were fascinating to me because they seemed unorthodox/taboo and yet rational in a way that often gets ignored in most conversations about suicide. I ended up writing them down in an essay and publishing them anonymously here.

After writing that piece, I found the nerve to talk to a few people. Those were some of the best conversations that I remember from 2013, and I think they’ve given me a new understanding of how friendship acts as a psychological anchor.

But there’s places where that anchor doesn’t fall deep enough. I get to those places sometimes and feel really alone and stuck. It helps to remind myself that things usually somehow end up getting better if I just wait it through.

The most subtle joke I’ve made all year

Dan Auerbach: Any doctor can prescribe any medication to anyone. That is a broken system.

Yan: Medication needs to be able to do doctor-pinning.

First day of work

Was great. Lots of tea and monitors.


Then I went home and cooked a surprisingly-phenomenal dinner with my housemates, the first time I’ve cooked in this house. Rhodey made potatoes with oranges, Mark contributed some wild rice, and I spun up yellow lentil daal with kale.


We sang some Neutral Milk Hotel songs afterward, and the future looked bright.