Reading List April 2020

5 minute read

Published:

Two HCN papers and an outlook on Julia (the programming language).

Subunit-specific modulation of HCN1 and HCN2 channel currents by a general inhalational anesthetic

I came across this paper by Chen et al. (2005) at the University of Virginia, in which the authors present an extensive investigation of how an inhaled anesthetic, halothane, modulates the activity of HCN1 and HCN2 channels - two HCN isoforms that are prominent in the brain. I’m not a big fan of structure-function studies (e.g. Porro et al. (2019)), because I find it difficult to keep track of individual residues. On the other hand, I find that chimeras and concatemers tend to yield results that are more intuitive to interpret.

The general anesthetic halothane was observed to inhibit HCN1 and HCN2 channels differently. To understand the structural basis of halothane’s isoform-specific effects, the authors generated a number of chimeric HCN channels by swapping various domains between the HCN1 and HCN2 isoforms. Stunningly, the subunit-specific effects of halothane were abolished when the CNBDs (cyclic nucleotide binding domains; large, intracellular structures) of HCN1 and HCN2 were removed (by cloning). Not only that, but adding cAMP to the pipette solution also changed how halothane inhibited the channels.

I really like the approach of using domain-swapped constructs to identify isoform-specific differences. To my knowledge, it was pioneered in HCN channels by the Siegelbaum group (Columbia University), who used it extensively throughout the 2000s to great success. One of their well-known publications from this period was Wang et al. (2001).

Wang et al. (2001)

This paper gave strong support for the idea that the cytoplasmic domain (CTD, which includes the CNBDs mentioned above) has an autoinhibitory effect on HCN channels, because, in the absence of cAMP, HCN channels are harder to open. This is typically quantified by the voltage that elicits half-maximal conductance, or open probability - the V1/2. In HCN2 channels, the V1/2 is shifted +15mV by saturating levels of cAMP, whereas the V1/2 of HCN1 channels only shifts about +5mV. Thus, HCN1 channels seem, in the absence of cAMP, to already be maximally shifted. Surprisingly, when the CTD was swapped between HCN1 and HCN2 channels, the voltage dependence of activation changed accordingly: HCN2 channels with the HCN1 CTD shifted by only +5mV, whereas HCN1 channels with the HCN2 CTD now displayed a large shift of ~+20mV. The authors went further to identify a more precise structural mechanism of cAMP modulation by swapping smaller pieces of the CTD.
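
As a rough, entirely hypothetical illustration of what such a shift means (the numbers below are made up, not taken from Wang et al. (2001)), activation curves like these are typically fit with a Boltzmann function, and the cAMP effect shows up as a depolarizing shift of the V1/2:

```julia
# Hypothetical illustration (my own toy numbers, not values from Wang et al. 2001):
# a Boltzmann activation curve for a hyperpolarization-activated channel,
# with cAMP modelled as a depolarizing shift of V1/2.

# Fractional activation as a function of voltage (mV);
# k > 0 so that activation increases with hyperpolarization.
boltzmann(v, vhalf, k) = 1 / (1 + exp((v - vhalf) / k))

vhalf_basal = -95.0   # made-up V1/2 without cAMP (mV)
k_slope     = 8.0     # made-up slope factor (mV)
camp_shift  = 15.0    # the ~+15mV shift reported for HCN2 with saturating cAMP

for v in -140.0:20.0:-60.0
    p0 = boltzmann(v, vhalf_basal, k_slope)
    p1 = boltzmann(v, vhalf_basal + camp_shift, k_slope)
    println("V = $v mV: activation $(round(p0, digits=2)) -> $(round(p1, digits=2)) with cAMP")
end
```

A +15mV shift of the V1/2 means the channel reaches the same fractional activation at less negative voltages - which is what “easier to open” means quantitatively.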

Thus, Wang et al. (2001) concluded that the CTD in HCN channels has an inhibitory effect that prevents the channel from opening at more positive voltages. This autoinhibition is relieved when cAMP is bound. The difference in cAMP response between HCN1 and HCN2 can then be explained as follows: the HCN1 CTD exerts weaker inhibition than the HCN2 CTD does (on their respective full-length channels). As a consequence, cAMP-dependent relief of autoinhibition produces a smaller effect in HCN1 (because it starts from a less inhibited state) than in HCN2.

Cytoplasmic autoinhibition in HCN channels is regulated by the transmembrane region

Related to the domain-swapping and autoinhibition ideas discussed above, this paper by Page et al. (2020) at Simon Fraser University (our neighbours!) explains how CTD-mediated autoinhibition interacts with the transmembrane domains (TMDs) of HCN channels. Here, the authors show how the voltage sensitivity and kinetics (of opening and closing) were modified in chimeric HCN2 channels containing the TMD from HCN4. Given that swapping the CTD can have dramatic effects (see above), it is not unexpected that swapping the TMD would as well. What is surprising, however, are the ways in which these effects manifested.

Think Julia

In my search for ways to optimize my code, I’ve recently picked up Julia and have enjoyed learning the language. Unlike Python, Julia incentivizes having a deep knowledge of object types (e.g. arrays, tuples, reals, etc.), which I’ve only otherwise seen in Cython. Julia is somewhat similar to Matlab in syntax (e.g. 1-indexing, ‘end’ statements closing loops and functions, etc.), so actively coding in both Python and Julia can require a bit of mental gymnastics. Think Julia is one of many great, practical references that I recommend to anyone starting out with Julia. I never had the patience to go through anything similar for Python, but now that I have more familiarity with programming, I can appreciate how important it is to build up the fundamentals first.
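
To illustrate (a toy example of my own, not from the book), here’s the kind of thing I mean about types and Matlab-like syntax:

```julia
# Toy sketch of my own to show what I mean about types and Matlab-like syntax.

# Type annotations on the arguments; Float64 is the usual workhorse "real" type.
function scale(x::Vector{Float64}, a::Float64)
    y = similar(x)              # output array with the same type and size as x
    for i in 1:length(x)        # 1-based indexing, unlike Python
        y[i] = a * x[i]
    end                         # loops (and functions) close with `end`, like Matlab
    return y
end

println(scale([1.0, 2.0, 3.0], 2.5))   # [2.5, 5.0, 7.5]
```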

In particular, I like the DifferentialEquations.jl package in Julia, as well as the Atom editor. I’ve never used a REPL for Python, but it is absolutely essential for Julia: packages take a long time to compile when first loaded, but within a REPL session that cost only has to be paid once - fantastic! However, I am most accustomed to Notepad++ as an editor, which I sorely miss sometimes when using Atom. Unfortunately, Notepad++ doesn’t have built-in syntax highlighting for Julia, so it’s a bit cumbersome to use for Julia.
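
To give a flavour of DifferentialEquations.jl, here’s a minimal sketch of my own (the problem and parameter values are arbitrary, not from my actual models):

```julia
# Minimal DifferentialEquations.jl sketch: exponential decay, du/dt = -p*u.
# (Toy problem of my own; the parameter values are arbitrary.)
using DifferentialEquations

decay(u, p, t) = -p * u                          # right-hand side of the ODE
prob = ODEProblem(decay, 1.0, (0.0, 5.0), 0.4)   # u0 = 1.0, t in [0, 5], p = 0.4
sol  = solve(prob, Tsit5())                      # Tsit5 is a good general-purpose solver

println(sol(2.5))   # solution interpolated at t = 2.5, ≈ exp(-0.4 * 2.5)
```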

Finally, I have been trying out various packages for running MCMC to estimate posterior densities for the parameters of systems of ODEs. I’ve tried some packages in Julia, including Stan.jl, DynamicHMC.jl, and Turing.jl, as well as their wrappers in DiffEqBayes.jl, but the learning curve is fairly steep. I initially wanted to start with Stan in R, but learning Stan itself proved incredibly challenging. Then, I nearly got things running in Julia, but there were some very complex type-related issues that I couldn’t figure out. So, I’m now resorting to PINTS in Python, which has been working alright. I’m aware that it’s not the most efficient/powerful approach out there, but hopefully it works out. So far, my laptop (with 8GB RAM) has been running for 2 days straight, and it’ll probably need to keep running for another week at this rate… RIP.
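
For the curious, here is roughly the shape of what I was attempting with Turing.jl - a hypothetical sketch on a toy exponential-decay model with synthetic data, not my actual channel model:

```julia
# Hypothetical Turing.jl sketch: infer the decay rate of du/dt = -p*u from
# noisy observations. A toy stand-in for the real HCN/ODE problem.
using Turing, DifferentialEquations

@model function decay_fit(t_obs, y_obs)
    p ~ truncated(Normal(0.5, 0.5), 0, Inf)       # prior on the decay rate
    σ ~ truncated(Normal(0.0, 0.1), 0, Inf)       # prior on observation noise

    # one(p) keeps the initial state the same numeric type as p, so that
    # automatic differentiation (dual numbers) doesn't trip over the types -
    # the kind of type issue I kept running into.
    prob = ODEProblem((u, p, t) -> -p * u, one(p), (0.0, maximum(t_obs)), p)
    sol  = solve(prob, Tsit5(), saveat=t_obs)

    for i in eachindex(y_obs)
        y_obs[i] ~ Normal(sol.u[i], σ)            # Gaussian likelihood
    end
end

# Synthetic data and sampling with NUTS:
t_obs = collect(0.5:0.5:5.0)
y_obs = exp.(-0.4 .* t_obs) .+ 0.02 .* randn(length(t_obs))
chain = sample(decay_fit(t_obs, y_obs), NUTS(0.65), 1_000)
```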