How #DIYPS became a part of our engagement

August 26, 2014 by

Confession: I like to plan things. However, there’s one thing I couldn’t (and wouldn’t) plan: our engagement.

Oh, did you hear? Scott and I are engaged! 🙂

Some time ago, I had joked that if/when we got to the point of making this decision, he definitely couldn’t do it while I was low, because I would be feeling awful and wouldn’t necessarily be able to remember every detail, etc.

Fast forward to this weekend

Scott and I hopped in a van on Thursday and drove down to Portland to get ready for Hood to Coast. Luckily, unlike my Ragnar run, our start time was a more civilized 9:30 am so we had time to see the sunrise before we started the race on Friday.

Sun rising, viewed from Portland.

We headed up the mountain, and then I kicked off the race by running down it (awesome). We then proceeded with the race, which, like Ragnar, is completed by 12 runners in 2 vans taking turns covering the course until you end up in Seaside, Oregon, for the finish. There were some snafus on the course by the organizers, including an hour-plus backup on the road leading to a field where runners were supposed to be able to sleep. By the time we arrived, the field had been closed, so we ended up getting less than an hour of sleep on the side of the road, smushed together in the van (7 people in one van!), before I had to crawl outside and force another few miles for my last run.

But, we finally finished, made it to Seaside on Saturday afternoon, and stood around waiting for our van 2 to fight through traffic and get there so we could cross the finish line together and be done! Everyone from our van (including Scott) ended up standing around in a crowd with a few thousand of our closest friends for more than an hour. Once they made it, we crossed, grabbed our medals…and were done.

Most of the team decided to head into the beer garden before splitting up for dinner and heading back to the hotel. I voted for heading back to dinner right away, because the sooner I ate, the sooner I got to finally go to sleep! Scott and I decided to head back.

Where I start unintentionally thwarting Scott’s plans

Scott suggested walking on the beach as we headed back, but as we skirted the finish line area, I realized how much it hurt to walk on the uneven sand with sore legs and hips from the run, and vetoed that idea. So we walked up off the beach to the esplanade/promenade, and walked down it instead. I called my parents to let them know we were alive after the race, and by the time I was off the phone, we were nearing the end of the esplanade. Scott asked one more time if I was sure I didn’t want to walk out to the beach. I think I said something along the lines of, “Fine, we can walk up to the beach, as long as we don’t have to walk up and down it!”

We walked over to the edge of the beach on the ridge, where you could see the fog and not much else, and there weren’t many people around. At that point, Scott was checking #DIYPS on his watch and saw that I was dropping from the walk and projected to go low, so he suggested sitting down for a few minutes as a break before we walked the rest of the way back to the hotel for dinner, and starting a temp basal to help prevent a low. He told me to pick a spot, so I picked a comfortable-looking log, and we sat down. Scott gave me a few Swedish Fish to eat just to make sure I didn’t go low, and then, while I was looking around at the fog, he started reaching into his pocket again and telling me he had another question for me.

I still had no idea what was going on (something about running a lot of miles and 1 hour of sleep), so I just looked at him. He pulled a box out of his pocket and opened it to another box, and I started to have an inkling of what MIGHT be happening, but my brain started to tell me that nope, wasn’t happening, what kind of person carries a ring in their bag/pocket all weekend around so many people without me noticing it, and didn’t we just run a race on one hour of sleep, and oh my gosh what is he doing?

By that point, he had flipped around and gotten down on one knee, asked me if I was ready to make it official, kissed me, and asked me to marry him. Thanks to #DIYPS I was *not* low, and despite the lack of sleep I had figured out what was going on at this point, and was able to say yes! 🙂

For those keeping track at home, despite my BGs starting to swing low which was supposed to be the reason we stopped to sit down…I later looked, and my BG never dropped below 99 during all of this! 🙂

My #DIYPS view is a little different now 🙂

What is #DIYPS (Do-It-Yourself Pancreas System)?

June 20, 2014 by

#DIYPS (the Do-It-Yourself Pancreas System) was created by Dana Lewis and Scott Leibrand in the fall of 2013.

#DIYPS was developed with the goal of solving a well-known problem with an existing FDA-approved medical device. As recounted here (from Scott) and here (from Dana), we set out to figure out a way to augment continuous glucose monitor (CGM) alerts, which aren’t loud enough to wake heavy sleepers, and to alert a loved one if the patient is not responding.

We were able to solve those problems and include additional features such as:

  •  Real-time processing of blood glucose (BG), insulin on board, and carbohydrate decay
  •  Customizable alerts based on CGM data and trends
  •  Real-time predictive alerts for future high or low BG states (hours in advance)
  •  Continually updated recommendations for required insulin or carbs
  •  …and, as of December 2014, we ‘closed the loop’ and have #DIYPS running as a closed-loop artificial pancreas.

You can read this post for more details about how the system works.
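
As a rough illustration of the kind of arithmetic behind those features, here is a minimal sketch. It is not the actual #DIYPS code (which is not public), and the insulin sensitivity factor, carb ratio, and alert thresholds are made-up example values.

```python
# Illustrative sketch only -- not the actual #DIYPS code, which is not public.
# It shows the kind of "eventual BG" arithmetic described above, using
# hypothetical settings (ISF, carb ratio, thresholds) and made-up inputs.

def eventual_bg(current_bg, iob_units, cob_grams, isf=40, carb_ratio=10):
    """Project where BG may end up once insulin on board (IOB) finishes
    acting and carbs on board (COB) finish absorbing.

    isf: insulin sensitivity factor, mg/dL dropped per unit of insulin
    carb_ratio: grams of carbohydrate covered by one unit of insulin
    """
    bg_drop_from_iob = iob_units * isf
    bg_rise_from_cob = (cob_grams / carb_ratio) * isf
    return current_bg - bg_drop_from_iob + bg_rise_from_cob

def alert_for(projected_bg, low=80, high=180):
    """Return an alert message based on the projected BG."""
    if projected_bg < low:
        return "predicted LOW: consider eating carbs"
    if projected_bg > high:
        return "predicted HIGH: consider a correction bolus"
    return "in range"

if __name__ == "__main__":
    projection = eventual_bg(current_bg=120, iob_units=1.5, cob_grams=20)
    print(projection, alert_for(projection))
```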

While #DIYPS was invented to make better use of a continuous glucose monitor (CGM) and was initially tailored for use with an insulin pump, we discovered that #DIYPS can actually be used with many types of diabetes technology. It can be utilized by those with:

  • CGM and insulin pump
  • CGM and multiple daily injections (MDI) of insulin
  • no CGM (fingerstick testing with BG meter) and insulin pump
  • no CGM (fingerstick testing with BG meter) and multiple daily injections (MDI) of insulin

Here are some frequently asked questions about #DIYPS:

  1. Q: “I love it. How can I get it?” A: Right now, #DIYPS is n=1, and because it is making recommendations based on CGM data, we can’t publicly post the code to enable someone else to utilize #DIYPS. But you can get Nightscout, which includes all of the publicly-available components of #DIYPS, including the ability to upload Dexcom CGM data, view it on any web browser and on a Pebble watch, and get basic alarms for high and low BG. We’re working to further develop #DIYPS, and also to break out specific features and make them available in Nightscout as soon as possible.
  2. Q: “Does #DIYPS really work?” A: Yes! For n=1, we’ve seen some great results. Click here to read a post about the results from #DIYPS after the first 100 days – it’s comparable to the bionic pancreas trial results. Or, click here to read our results after using #DIYPS for a full year.
  3. Q: “Why do you think #DIYPS works?” A: There could be some correlation with the increased time/energy spent thinking about diabetes compared to normal. (We’d love to do some small-scale trials comparing people who use CGMs with easy access to time-in-range metrics and/or eAG data, to compare this effect.) And #DIYPS has also taught us some key lessons related to pre-bolusing for meals and the importance of having insulin activity in the body before a meal begins. You should read 1) this post that talks about our lessons learned from #DIYPS; 2) this post that gives a great example of how someone can eat 120 grams of carbohydrates of gluten-free pizza with minimal impact to blood glucose levels with the help of #DIYPS; and 3) this post that will enable you to find out your own carbohydrate absorption rate, which you can start using to help you decide when and how you bolus/inject insulin to handle large meals. And of course, the key reason #DIYPS works is that it reduces the cognitive load for a person with diabetes by constantly providing push notifications and real-time alerts and predictions about what actions a person with diabetes might need to consider taking. (Read more detail in this post about the background of the system.)
  4. Q: “Awesome! What’s next?” A: We’re working on new features for #DIYPS, of course. Those include:
    • better real-time BG readings using raw unfiltered sensor values
    • calculation of insulin activity and carb absorption curves (and from there, ISF & IC ratios, etc.) from historical data
    • better-calibrated BG predictions using those calculated absorption curves (with appropriate error bars representing predictive uncertainty)
    • recommendations for when to change basal rates, based on observed vs. predicted BG outcomes
    • integration with activity tracking and calendar data
    • closing the loop – done as of December 2014! 🙂

    We also are starting to collaborate with medical technology and device companies, the FDA, and other projects and organizations like Tidepool, to make sure that the ideas, insights, and features in #DIYPS get integrated as widely as possible. Stay tuned (follow the #DIYPS hashtag and Dana Lewis & Scott Leibrand on Twitter, and keep an eye on this blog) for more details about what we’re up to.

  5. Q: “I love it. What can I do to help the #DIYPS project?” A: We’d love to know if you’re interested in helping! First and foremost, if you have any ability to code (or a desire to learn), we need contributors to the Nightscout project. There are many things to work on, including implementing the most broadly applicable #DIYPS features into Nightscout, so we need as many volunteers, with as many different types of skills, as we can get. For those who are less technical, the CGM in the Cloud Facebook group is a great place to start. Click here to see the Nightscout project roadmap; it shows what developers are currently working on, what each of our priority focus areas are (as of 11/26/14), and the ‘backlog’ of projects we know we want (and the community wants), but no one has started on yet (jump on in!).
    If you want to contact us directly, you can reach out to us on Twitter (@DanaMLewis @ScottLeibrand and #DIYPS) or email us here. We’d also love to know if you’re working on a similar project or if you’ve heard of something else that you think we should look into for a potential #DIYPS feature or collaboration.


Dana Lewis & Scott Leibrand

How I Became a DIY Artificial Pancreas System Builder

February 6, 2014 by

As some of you may know, I’m a Network and Systems Engineering type who works in the Internet industry.  I have a B.S. in Cell and Molecular Biology, but I have been working in computer networking and for Internet companies for the last decade and a half (since I was still in school), and have had the privilege of working on a number of different types of systems in that time.  Nine months ago, I met a wonderful person who happens to have Type 1 Diabetes, and began learning just what was involved in managing the condition.

On the one hand, I was impressed at the level of data available from her CGM (glucose readings every 5 minutes), and the visibility that provided into what was going on.  But I was also very surprised to find that the state-of-the-art medical technology she uses is, in many ways, stuck in the last century.  Data is not shared between devices; her pump looks and acts like it came straight out of the early 1990s.  (But at least it’s purple!) 😉

Most importantly, it is completely up to the person with diabetes (PWD) to collect all the relevant data on their current state, do a bunch of math in their head, and decide what to do based on the data, their experience, and how they’re feeling.  And as a PWD, that’s not something you just do once in a while: it is a constant thing, every time you eat anything, every time your blood sugars go high or low (for dozens of reasons), every time you want to go exercise, etc. etc. etc.  As a PWD, you’re dealing with life-and-death decisions multiple times a day: Too much insulin will put you into hypoglycemia and can kill you.  No insulin for too long will put you into diabetic ketoacidosis and kill you.  Overreacting to a high or low blood glucose (BG) situation and correcting too far in the other direction can completely incapacitate you.  So as a PWD, you never get a break.

So, very early on, the obvious question was: why can’t we integrate all this data and make this easier?  There were promising signs that it could be done: Artificial Pancreas Systems developed by various research groups and medical device companies are in clinical trials, and are showing promising results.  In fact, she had just signed up for such a trial, and I was able to go with her to the clinic and watch just how a fully automated APS system works.  But despite these brief hints of a better future, day to day we were (usually, she was) stuck doing everything manually.

It was very clear from the beginning that the primary bottleneck to doing something better was the ability to get blood glucose (BG) data off her CGM (a Dexcom G4) in real time.  We knew it was possible to do so manually using Dexcom Studio, a Windows-only software package that is actually quite good for analyzing historical BG data, spotting trends, etc.  But unless we were going to create a Windows macro to open up Dexcom Studio, import the data, export it to CSV, and then close Studio every 5 minutes, we couldn’t get our hands on the data to do anything in real time.

Then, we discovered that John Costik had figured out how to use the Dexcom USB driver’s API to get data off his young son’s CGM in real time, and display it remotely, even on his Pebble watch, and even when his son was at day care or kindergarten.  This was the breakthrough we needed, so we contacted John, and he was able to provide us a copy of his script.

So now we were off to the races.  I cobbled together a system using Dropbox for uploading the BG data from a Windows laptop (that ended up in a bedside drawer), and once I was able to get the data onto a VM server, got working on putting together something that would actually make a difference in her quality of life.

So we worked through a number of ideas.  First off, we needed something that could wake her up at night when her BGs got too high or too low.  The Dexcom G4 is supposed to do this, but even its loudest alerts are still not loud enough to wake a sound sleeper.  People recommend sticking the G4 in a glass or in a pile of change, but that doesn’t work, either. She had also used Medtronic’s CGM system, but even its loudest, escalating “siren” alert isn’t all that loud, even when you’re awake.  We also needed something that would allow me to see the alerts remotely, and see whether she is awake and responding to them.  This is a big deal for many people, like her, who live alone.  So, I started coding, and she started testing.  We ended up using my cracked-screen iPad (it got into a fight with Roomba) as a bedside display (it’s much easier to glance over and see BG values than to have to find the CGM and punch a button), and also to receive much louder push notification alerts.

After we got basic notifications working, we also started trying to enhance them.  We ended up with a prototype system that allows the user to enter when they bolus insulin or take carbs, and snoozes alarms appropriately.  (This enables a de facto remote alert monitoring system, especially handy for individuals living alone.)  With that info, we are also able to do the calculations required to determine how much insulin is needed at any given point (the same calculations PWDs do in their head every time they eat or have to correct for high BG), and by doing those every 5 minutes as new data came in, we were able to provide early warning if she was likely to go low, or if she needed more insulin to correct a persistent high.  Since getting all of that working, we have also developed a meal bolus feature, which takes advantage of an (as far as I know unique) ability to track how fast carbs are likely to be absorbed into the bloodstream to give the user better estimates of how much insulin is required now, vs. how much will likely be required later as the rest of the meal is digested.
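
To illustrate the “insulin needed now vs. later” idea, here is a toy sketch. It is not the actual #DIYPS logic; it assumes a constant carbohydrate absorption rate and uses invented settings purely for illustration.

```python
# Toy sketch of the "insulin needed now vs. later" idea described above.
# Not the actual #DIYPS meal-bolus logic; it assumes a constant carbohydrate
# absorption rate and made-up settings purely for illustration.

def meal_bolus_split(meal_carbs, absorption_rate=30, insulin_duration_hours=3,
                     carb_ratio=10):
    """Split a meal bolus into what is useful now vs. what should wait.

    absorption_rate: grams of carbohydrate absorbed per hour (assumed constant)
    insulin_duration_hours: how long a bolus keeps acting
    carb_ratio: grams of carbohydrate covered by one unit of insulin
    """
    # Carbs that will be absorbed while the first bolus is still active.
    carbs_covered_now = min(meal_carbs, absorption_rate * insulin_duration_hours)
    carbs_for_later = meal_carbs - carbs_covered_now
    return carbs_covered_now / carb_ratio, carbs_for_later / carb_ratio

now_units, later_units = meal_bolus_split(meal_carbs=120)
print(f"bolus now: {now_units:.1f} U, bolus later: {later_units:.1f} U")
```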

Since she began actively testing the #DIYPS (Do It Yourself Pancreas System), we have seen a decrease in average BG levels, with less time spent low, and less time spent high. It also enables her to execute “soft landings” after a high BG and prevents “rebounds” after a low BG.  We are very encouraged by what we’ve seen so far, and are continuing to iterate and add additional alerts and features.  But today, #DIYPS is just a prototype system, and has only been development-tested with a single user, so lots of additional development and testing are required.  We are not yet ready to begin allowing (and helping) a large number of users to test the system themselves, but hope to open it up in a manner that will allow as many people as possible to begin safely using the system as soon as possible.  However, this is currently only a 2-person side project.  We are looking to begin collaborating with anyone who has the skills and interest required to move the project forward, so if you think it’s worthwhile to move forward more quickly, I would encourage you to get involved (if you have technical or other skills that would be helpful) or pass along the word to others who might be interested and able to help.

If you’re interested, you can also read more about why a #DIYPS is needed now (and how it compares to true closed-loop automated Artificial Pancreas Device Systems).  We’ll also be publishing a full description of how the #DIYPS prototype works shortly, so stay tuned.

Scott Leibrand

Predicting IPv6 transition

March 29, 2013 by

Recently someone asked if I had any predictions about when/how the world will transition from IPv4 to IPv6.  After writing up my response, I thought it might be worth posting.  If nothing else, this will make it easy to check back in a few years and see how far off I was.  😉

So, the only predictions I can make with much confidence are qualitative, not quantitative.  As we’ve expected for a while now, the first step in IPv6 transition was always going to be the dual-stacking of major backbones and big content.  That is now essentially complete for the biggest content providers, and others can easily come on board with very little effort (if nothing else, they can put an IPv6-capable CDN like Limelight in front of their website).

But where it really gets interesting is to see where people actually start transitioning *off* of IPv4.  To date, the only big deployments I know of there are people like Comcast renumbering their internal addressing (of set-top boxes and the like) from private IPv4 to IPv6.  But moving actual customer traffic off of IPv4 onto IPv6 is much harder: Comcast has publicly admitted that, due to the lack of home router IPv6 support in anything but high-end routers like the Airport Extreme, they will, for the most part, only be providing IPv6 service to customers who plug directly into their modems and don’t use a router.

Mobile is another story, though.  As it happens, that is where almost all of the growth is (in number of subscribers, if not bandwidth), and as Cameron Byrne has presented at several conferences recently, there is a working IPv6-only solution for handsets called 464XLAT that allows them to use IPv6 on the internal network, and only use IPv4 on the external NAT pools for connections to the IPv4 Internet.  Since network and phone support are already there, I suspect they will begin gradually rolling it out mostly to test customers, with a plan to start cutting production customers over as soon as they make their last IPv4 request from ARIN.

Similar technologies (like DS-Lite) also exist for residential deployments, but will not be practical until IPv6-capable home routers begin shipping en masse.

It’s not clear to me how networks in Asia are dealing with their new inability to get new IPv4 addresses.  I suspect that since they were built from the ground up around heavy NAT, they are simply putting more subscribers behind the same number of addresses.  I’m not sure if they’re also deploying IPv6 much or not.

The other aspect I haven’t discussed is enterprise networks.  They tend to be very comfortable with NAT as well, so they likely aren’t feeling a lot of pressure.

Many content providers will initially be fine with their existing space for organic growth, as new servers tend to be higher-capacity and require fewer IPs for the same traffic.  But large, fast-growing content providers will probably place a high value on addresses (when you’re spending $10k/server, $10/IP is a rounding error) and will be willing to pay to acquire whatever addresses they need.

For some time, I think the supply of addresses will come mostly from organizations that are defunct or no longer need the addresses at all.  But if the IPv6 transition drags on for a while (as it probably will), we will get to a point where the marginal supply is provided by organizations using the money to finance renumbering out of existing space onto some sort of NAT64 technology (like those highlighted above), with a much smaller pool of IPv4 kept for NAT and static-IP services.  As Lee Howard highlighted at a recent NANOG, that demand should be fairly elastic, as there are lots of people who can renumber out of IPv4 at the right price.

But getting back to more quantitative predictions, I think IPv6 deployment will follow a classic S-shaped curve: exponential and slow-growing at first, moving through the middle fairly rapidly, and then asymptotically approaching 100%.  As we’re still in the first exponential phase, it’s difficult to fit a good curve to the data and predict when it will take off.  Just guesstimating, I’d say 5 years, or a range of 2-10 years (90% CI), to reach the point where half of the devices are running IPv6 (either native or dual-stack).
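
For the curious, here is a back-of-the-envelope sketch of what fitting such an S-curve might look like. The adoption numbers in it are invented placeholders, not real IPv6 measurements, and with so little early-phase data the fitted midpoint is (as noted above) highly uncertain.

```python
# Back-of-the-envelope sketch of fitting a logistic (S-shaped) adoption curve.
# The data points below are invented placeholders, not real IPv6 measurements;
# with only early-phase data the midpoint estimate is very uncertain, which is
# exactly the point made above.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, midpoint, rate):
    """Fraction of devices running IPv6 at year t (t = 0 is the first data point)."""
    return 1.0 / (1.0 + np.exp(-rate * (t - midpoint)))

years = np.array([0, 1, 2, 3, 4])                          # hypothetical years of data
adoption = np.array([0.002, 0.004, 0.008, 0.015, 0.03])    # hypothetical fractions

(midpoint, rate), _ = curve_fit(logistic, years, adoption, p0=[10, 0.7])
print(f"estimated 50% adoption around year {midpoint:.1f} (rate {rate:.2f}/yr)")
```
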
-Scott

Predicting everything

November 12, 2012 by

So I just got back from an awesome workshop put on by the DAGGRE project this weekend, where I was invited to give a presentation.  I learned a lot, and had a chance to discuss some really interesting ideas with some really smart people. It all came together for me, over the weekend and on my trip home, into something that could possibly launch me into a new career, or at least be a really interesting project.

So the first thing I learned was that I really do seem to be good at forecasting and predicting a lot of different things.  I sorta already knew that, as I had attended DAGGRE’s March workshop, did really well on their workshop prediction market, and went home committed to get myself up to #1 on the DAGGRE leaderboard.  I did so over the course of about 6 months, and as a result of my success got invited to give a presentation at this week’s workshop on my winning strategies.  But beyond that, I learned that I seem to have a broader knack for making well-calibrated and accurate predictions and estimates, over a really wide range of subjects.

The workshop also reinforced just how hard it has been to transition a lot of new forecasting techniques (prediction markets especially) into widespread use, often for a quite fundamental reason: we really don’t *want* better forecasts of many things in life, especially information about our own (organization’s) prospects.  IARPA is sponsoring DAGGRE and several other projects to develop these techniques for use in the intelligence community, but I’m much more interested in their broader application.  There does seem to be demand for forecasts about other people (industry trends, economic indicators, and all the stuff financial markets do), but most of those markets are already served by incumbents of one sort or another.  So in order to apply new techniques, you either have to convince those companies to adopt them, or you have to outcompete them and overcome all their incumbency advantages.  Either of those are way more difficult than just making good forecasts.

But a third thing I’ve noticed is that, except for a few narrow fields like election forecasting, there aren’t a lot of people releasing public forecasts of the likelihood of various events.  There is a lot of raw data out there, from Twitter feeds, to prediction markets like Intrade, to online sportsbooks, to options and derivatives on financial markets, but it is often left as an exercise to the reader to translate that data into anything representing a forecast, such as a probability of a certain event.  So maybe there’s an unfilled niche in making better public forecasts about the kinds of things people have already proven their interest in by the money they have on the line…

So here’s my idea:

Firstly, we should put together a general-purpose open-source prediction model that can take a large number of prediction-like inputs, such as options chains, and convert them into statements like “the market is predicting an X% chance of Y happening by Z date.”  Perhaps we run a website that continuously recalculates that probability based on market prices.  But perhaps in order to see the prediction, you first have to give us your estimate (maybe in Delphi form, as a best guess with a minimum and maximum probability and a % confidence).  We can then take all the estimates provided by the website’s users, and calculate (and present) them as something like “but our users are predicting an A% chance, with a 90% confidence interval of B%-C%”.  Obviously, that could be a source of insight into areas where the estimates of our users differ from those of the financial markets.  A naive Delphi forecast by itself probably isn’t going to outperform the market, but it may point to areas where additional forecasting effort using some of the techniques below might lead to some profitable options trading opportunities.
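
As a concrete (and heavily simplified) example of that first step, here is a sketch of how one might turn a slice of an options chain into a statement like “the market is predicting an X% chance of the index finishing above K.” The prices are invented, and it ignores discounting, bid/ask spreads, and the gap between risk-neutral and real-world probabilities.

```python
# Rough sketch of step one: turning an options chain into a statement like
# "the market is predicting an X% chance of Y." Under the risk-neutral
# measure, P(S_T > K) is approximately -dC/dK, which we can estimate with a
# finite difference across neighboring strikes. The prices below are made up,
# and this ignores discounting, bid/ask spreads, and the difference between
# risk-neutral and real-world probabilities.

def prob_above(calls, strike, step):
    """Estimate P(underlying > strike at expiry) from call mid prices.

    calls: dict mapping strike -> call mid price
    step:  spacing between adjacent strikes in the chain
    """
    return (calls[strike - step] - calls[strike + step]) / (2 * step)

chain = {1400: 62.0, 1410: 55.5, 1420: 49.5}   # hypothetical S&P 500 call prices
print(f"implied P(S&P > 1410) ~ {prob_above(chain, 1410, 10):.0%}")
```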

Once we have a large number of predictions being imported from financial markets (and maybe being estimated by interested users), we should also start linking them up using combinatorial methods.  Through a process such as Bayesian inference, such methods can be used to express the impact a particular event should have on any particular prediction.  For example, a worse-than-expected jobs report will have obvious impacts on S&P500 options and futures, but may have more wide-ranging direct and indirect impacts that could be usefully represented in the model, and used to flag market prices that may not have fully adjusted yet.  This is undoubtedly a simplistic version of the kinds of analysis that hedge funds are already doing, but it may be useful to present a publicly accessible model of how things are likely to change with changing conditions, and allow people to provide input on the inter-relationships, and also use the model themselves to do their own scenario testing, etc.
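
Here is a minimal sketch of the kind of conditional linking I have in mind, using Bayes’ rule on made-up numbers: given estimates of how likely a bad jobs report is in and out of a recession, the report’s release updates the recession probability (and, in a fuller model, everything linked to it).

```python
# Minimal sketch of the conditional-linking idea above: if we have estimates
# of P(recession), P(bad jobs report | recession), and P(bad jobs report | no
# recession), then a bad report updates the recession probability by Bayes'
# rule. All three input probabilities here are invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the hypothesis after seeing the evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

prior_recession = 0.20
posterior = bayes_update(prior_recession,
                         p_evidence_if_true=0.70,   # bad report is likely in a recession
                         p_evidence_if_false=0.25)  # and less likely otherwise
print(f"P(recession | bad jobs report) = {posterior:.0%}")
```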

If such a model gets enough traction to be developed into something that can robustly represent (and collect estimates for) a large number of interrelated predictions about the world, it may also be useful to start incorporating other predictions, models, relationships, etc. in and linking them to the economic data.  For example, The Economist recently published an article about the coming transition to driverless cars, and all the changes that will bring to industries from hotels to insurance to the country pub.  It would be quite interesting to actually represent such impacts in a prediction model, in a way that they (and their many interactions) can be evaluated.  I’m pretty sure there are investment insights lurking there, in addition to all the impacts that people like transportation planners should be considering.

For another example, last year I predicted that unsubsidized solar photovoltaics would hit grid parity sometime between 2015 and 2020, that annual solar PV production could grow exponentially until it matches the annual growth in world energy consumption (maybe sometime next decade), and that after that point, the cost of solar PV and grid electricity will be inextricably linked, and will continue to fall over time.  I’ve done a poor job of blogging my thoughts on the potential impacts of that, but for one thing it would put us on a path to solving the climate change problem via a dramatic decrease in carbon emissions (and by eventually making energy cheap enough to make it cost-effective to actually pull carbon back out of the air).  That would of course also mean desalination would be so cheap that coastal areas would no longer have to worry about water shortages.  There are probably at least thousands of other ways that cheap energy would affect hundreds of different industries, and there’s no way I could think of even a fraction of them.  But if we had a model where different people could input their ideas about how such scenarios would affect the parts of the economy they know about, we might be able to come a lot closer to actually understanding the potential impacts of such a change.

Now, just because technological improvements will eventually eliminate the need for most carbon emission doesn’t mean we aren’t at risk of significant disruption between now and then.  But there are already a lot of really smart people working on estimating exactly how likely such scenarios are.  If we could also incorporate predictions from their models, and more importantly get them to link them via conditional estimates to the rest of the prediction model, that would create a tremendously useful tool for actually coming up with decent probabilistic estimates of the uncertainties, risks, costs, and benefits of various potential approaches to dealing with climate change.

Anyway, that probably captures the vision.  Obviously there’s a long road between here and there, but I can see several ways to start incrementally producing useful insight.  My next step will probably be to start doing some digging to see if there’s anything in the literature about how to convert options chains into probability estimates, or if I need to learn how to use Black-Scholes to calculate them myself.  If you know of anything related to that or any of the other ideas above, please point me to any information available online (or put me in touch with anyone I should talk to). I’m not really interested in reinventing any wheels here, but I think there’s an opportunity to put some existing parts together in a novel way to make something that’s actually useful (and who knows, maybe even be able to make some money with it).

Scott’s Voter’s Guide

October 28, 2012 by

So at least one person has asked if he could “copy my answers” and take advantage of all the research I’ve done in choosing who and what to vote for this year.  In case there are others, here’s my “answer key”.  🙂

I-1185: No

This is Tim Eyman’s latest effort to cripple state government, particularly by blocking transportation funding through tolling.

I-1240: Yes

Washington needs a limited number of well-regulated charter schools to help improve education for at-risk kids.

R-74: Approve

The state shouldn’t be in the business of discriminating based on sexual orientation.  That’s a job for individuals and religious organizations, and this initiative preserves their rights not to be involved in same-sex marriages.

I-502: Yes

The war on drugs has failed.  Marijuana is only about as dangerous as alcohol or tobacco, and should be treated similarly.

SJR-8221: Approve

This would average the state’s borrowing limit over 6 years instead of 3, so we can borrow during downturns as well as during booms.  It also reduces the total limit from 9% to 8%, on a slightly larger base (including property taxes).

SJR-8223: Approve

This would let the UW and WSU invest unused operating funds in the stock market, as is already allowed for state pension funds, the universities’ endowments, etc.

AV-1 (SB6635): Maintain

The Senate and House voted to eliminate a tax loophole.  Tim Eyman’s I-960 requires we approve it.  Duh.

AV-2 (HB2590): Maintain

The House and Senate voted *unanimously* to eliminate a tax loophole.  But Eyman’s initiative still requires we vote on it.  (Can you tell I don’t like him much?)

President: Obama (D)

Senator: Cantwell (D)

US Rep. District 2: Larsen (D)

Governor: McKenna (R)

Secretary of State: Wyman (R)

Treasurer: McIntire (D)

Attorney General: Dunn (R)

Lands Commissioner: Goldmark (D)

Insurance Commissioner: Kreidler (D)

38th LD pos 1: McCoy (D)

38th LD pos 2: Sells (D)

State Supreme Court pos 9: McCloud

Everett Council pos 5: Robinson

Snohomish PUD district 2: Vaughn

A moderate at a Republican convention

March 31, 2012 by

So today I attended the 2012 Snohomish County Republican Convention.  And yes, I think I really was the most moderate one there (of the people I met, anyway).  And I ended up as the whip.  Go figure.

It’s a long story.  Maybe a couple people will read it all the way through, maybe not.  🙂

So a month ago, I decided to attend my precinct caucus to participate in deciding the Republican nominee for President.  Since Washington’s caucuses were earlier this year, and the race is dragging on longer, this is the first time my presidential vote actually had a chance to count for something.  And I have an opinion to express: Romney is a decent candidate (as was Huntsman), but the other three remaining ones are way too extreme to get my vote.  I haven’t made up my mind who I’ll vote for in November, but I thought I’d do my part to make sure we had the best choices possible.  So I made plans to show up on that Saturday morning for my local caucus.

When I arrived, it was a pooled caucus at Everett High School, meaning all of the local precincts were together at different tables in the cafeteria.  As it turns out, there were only three others from my caucus who showed up: Julie Lingerfelt, her husband, and their son.  Our precinct had 2 delegates and 2 alternates to fill, so despite the fact that all three of them were Ron Paul supporters and I was for Romney, we didn’t even vote on it: Julie and I were unanimously elected delegates, and her husband and son the alternates, to the County convention today.

Lesson one: 50% of politics is just showing up.

Now, being slightly CDO myself (that’s like OCD, but alphabetized, like it should be) I wasn’t going to just show up at the caucus, sit back, and do nothing.  No, I decided to read the proposed Snohomish County Republican Platform, and then further decided to revise it to something I could agree with.  Remember, I’m a moderate here.   So I did.  Now, a plan was taking shape in my head that maybe I could do my own little part to help moderate the Snohomish County Republican Party a bit, and get them more to the point where they’d get more support from independents and moderates like me.

So, after I revised the platform, I then decided to send it to the GOP’s 38th Legislative District chair for feedback.  I did get some good feedback, with advice on getting any amendments I wanted to propose to the County GOP office in advance.  So, having caught the bug, I thought I’d drive down there and see what the process was.  I talked to a couple ladies at the county GOP office, who had some additional info on the process, but more importantly, got me in touch with Bob Williams, the Platform Committee chair.  Having learned by this point that there were going to be over a thousand delegates at the convention, I narrowed down my list to about a dozen amendments that I thought might actually get support, and sent those to him.  Bob gave me some very good feedback, helping me eliminate some of them that didn’t really substantively change anything, and modify those that weren’t going to get consensus.  The end result was a short list of four amendments, which I sent over as motions to be added to the agenda for the Platform discussion.

Now, having done all this (and exchanged a few more emails), Bob apparently recognized that I had a bit of passion or something (that’s my CDO, remember), and got my name over to Natalie Lavering, who was helping him coordinate the Romney campaign.  Next thing I know, she’s asking me if I could be a floor leader for the Romney campaign for my 38th Legislative District.  (The way it works is the County convention breaks into about 8 caucuses, one for each legislative district in the county.)  She also invited me to a floor leaders’ meeting with the Romney campaign that evening. This stuff can move fast.

So, here I am, going to my first convention ever, and I’m supposed to be one of the floor leaders.  Ok, that seems doable.  Certainly there’ll be other people there who know what to do, right?  We talked strategy and tactics at the floor leaders’ meeting that evening, which basically boiled down to this:  The Romney campaign was, at the time, cooperating with both the Santorum and Gingrich campaigns to put forward a Unity slate of candidates, which would give proportional representation to Romney, Santorum, and Gingrich delegates according to the number of delegates each one had won at the precinct caucuses.  (The way the voting works at these conventions is you have to have a majority (50%+) to get anyone elected a delegate in the first three rounds of voting, after which it drops to a plurality vote.  So if you can get more than half of the caucus to agree on which of the 23 delegates they’re going to vote for, your slate wins.)

Now, as it turns out, the 38th LD that I live in is one of the less traditionally Republican in Snohomish county (representing urban Everett etc.), so it turns out that the Ron Paul folks had a plurality of the declared delegates in our LD, unlike in any other district in the county.  Romney was behind by only a couple delegates, and neither had a majority, but if you added up the Paul and Santorum delegations, along with their undecideds, they did.  And as we were wrapping up the meeting, there were already rumors that the Santorum national leadership was coming in to override the deal the local Santorum leadership had made with the Romney and Gingrich campaigns.  I still don’t know all the details, but suffice it to say that’s exactly what happened, and it sounds like it got ugly within the Santorum camp for a bit.  At least in our LD, there were some local leadership who wanted to stick with their party, and stayed on the Unity slate, and others who sided with the national leadership to do some sort of alliance with the Ron Paul folks (in the interest of denying Romney a majority and pushing for a brokered convention).

But, I didn’t know all the details of that yet when I showed up early on a Saturday morning for the caucus.  All I knew was that I was supposed to get myself signed in, and report to the Romney table, where I’d get some stickers, a floor leaders’ cap, and this Unity slate they kept talking about.

So that’s what I did.  But Kurt, our poor LD chair, was getting overwhelmed checking in all 178 delegates, plus alternates, so I offered to come back and help him after I got checked in.  Even with additional help, that took until after 9am when the convention and speeches started, and meant that I got a late start on collecting the 5 signatures I was supposed to get for each of my 4 platform amendments and turn in to the Secretary by 9:30.  But, I went ahead and got them signed, and caught her once she finally stepped down from the dais for a minute to hand them over.

So, I was finally back on track.  And now we just had to wait for the folks checking everyone’s credentials to finish their work and come give their report, which meant more speeches.  McKenna’s was pretty good: I agree with a lot of what he has to say.  But a lot of the others were typical red meat conservative stuff, which I found myself laughing at more than applauding to.  Anyway, I finally got myself seated with my LD before the speeches finished.  Once they did, and the Credentials Committee gave their report, we finally got down to the first order of business: selecting the permanent Chair for the rest of the convention.  I knew it was important that we make sure we didn’t have someone with the goal of delaying the convention, so I figured this was important, and made sure I knew whom to vote for.  It turns out, though, that this was the part of the main convention that delayed us the most, because the initial voice vote was declared before it was unambiguous, so we had to go to a show of hands.  But then the three people who counted the hands couldn’t agree on the count, with counts ranging from 590 to over 800, with 622 required for a majority.  So then we had to all stand up and count ourselves off as we sat down.  Someone obviously didn’t know how to count, because we got to 640 and stopped while there were obviously still over a hundred people standing.  Anyway, with that finally out of the way, we got to the discussion of the rules.  There were a couple motions, both defeated, regarding nominations and candidate speeches.  And then, FINALLY, we got to adjourn to our LD caucuses, and more importantly, get lunch.  (I think it was well after 1pm at this point).

Over lunch, the Romney campaign finally distributed the Unity slates.  There was a Delegate slate and an Alternate slate, and I had no idea who was who on it (except that I had made the cut for the delegate slate, woohoo!).  After a few minutes to get (but not finish eating) our lunch, we got started on the real business, the 38th Legislative District Caucus.  We had another contested vote for the chair of the LD caucus, but went straight to stand-up-and-count-off, and got through it pretty quickly.  Kurt got elected as permanent Chair, but with only a plurality (there were probably some people still doing lunch or something).  So we finally had ourselves a caucus and could get down to the business of nominating delegates, letting them give their 30-second speeches, and voting on them.

Now back when we got our call to the convention, there was a paper included where any interested delegate or alternate could get their name added to the pre-printed ballot for delegates to the State convention.  I had done so, but once I started really looking through our Unity slate, it turns out there were a lot of names on the slate of people who hadn’t.  That was a bit concerning, but I knew we got to do floor nominations, so we got down to the business of nominating everyone on our Delegate slate from the floor.  As we did so, we discovered that there were four of our 23 who had refused nomination, or for some other reason didn’t end up as candidates.  So, we had to figure out who to vote for, in addition to our remaining 19 (who, remember, were Gingrich and even a couple Santorum delegates in addition to Romney).  So we all gave and listened to everyone’s 30-second speeches, and tried to figure that out for ourselves.

And then, finally, we got to the first ballot.  All the declared Romney supporters knew they were supposed to vote the slate, and I think most of them did, but we didn’t have the numbers to get anyone through on the first ballot, and in fact the non-Unity camp managed to get 4 (turns out they were all for Santorum).  So, after this, I finally realized that we needed to be coordinating better.  I had thought someone else would be coordinating things, but apparently not.  So, having seen the Paul camp run off to the back of the room to huddle, I called over all the Unity supporters to the side of the room and attempted to get everyone on the same page.

I don’t remember what all was said, but I think we managed it, because on the 2nd ballot only one more (Santorum) got through, and no one advanced on the 3rd ballot.  Turns out I was worried about the wrong thing, though, because I thought we’d lose more than that on each ballot (remember the numbers were against us), so I was trying to figure out who we’d eliminate (as a group) if we didn’t have enough slots.  But it turns out the 3rd ballot was important for another reason: they eliminated everyone except the top 36 candidates (to fill the 18 remaining slots), which meant if we hadn’t been coordinated, we wouldn’t have had enough of our folks to vote for on the 4th and final ballot.  The 4th ballot runs by different rules: it only requires a plurality, and the remaining slots are filled by the top vote-getters all the way down.  We voted our slate and ended up with 6 of the 18 remaining slots.  The ones of ours who won were the long-time well-known Republicans on our slate: no big surprise there.  Newbies like me didn’t make it through, and given the numbers we had, I wasn’t really expecting to be elected a delegate.  Turns out there were about 11 Paul supporters elected delegates, the 5 Santorum ones, the 6 from our slate, and maybe one more Gingrich delegate.

I was still hoping to be an alternate in Tacoma, and knew at least some alternates would be seated as delegates there, so we reconvened yet another Unity caucus meeting and decided as a group who we’d nominate and vote for as alternates if they didn’t win as delegates.  (Because of the timing, the only time we had to caucus and strategize was actually after each vote, but *before* the results came back from the vote-counters.  So each time we were planning our next move without knowing the results of the last one.)  Our delegates did a pretty good job coordinating, but unfortunately it was getting late, and I think we were losing people.  (The Paul camp is more organized, and better at sticking things out.)  In any event, we only managed to get one alternate elected in the first round and two in the second round.  And none of them were very highly ranked, which means they’re less likely to get promoted to delegates in Tacoma.  And no, I wasn’t one of the three.

Now, by this time, it was after 8pm, and there were some LDs that were moving a lot more slowly than we did.  So it was clear we weren’t going to get to the platform after all, and that they’d have to reconvene us all at a later date and try to get a 40% quorum.  Which means I haven’t (yet?) had a chance to present or discuss my 4 platform amendments, which was one of my main goals in being there.  But as often happens, life is much more interesting when it doesn’t go as you had planned.  Our caucus was a lot of work, but a lot of fun, too.

So, that’s how I, the most moderate person (I know of) from the least conservative legislative district in the county, ended up as the whip for the Romney campaign at our LD caucus.  (And if you’re not familiar enough with political jargon to understand why that’s ironic, the whip, in a legislative body, is the legislator who rounds up the votes for their party.  They’re usually one of the most partisan legislators that party has.)

So: Lesson two: apparently the other 50% of politics is just speaking up.

Our exponential solar energy future

July 6, 2011 by

So about 5 years ago I became aware of the fact that, based on a number of analyses showing the costs of solar photovoltaic cells decreasing exponentially over time, the cost of solar-generated electricity appears to be subject to Moore’s Law style economics.  As a result, I’ve been predicting that we’re going to see solar PV hit grid parity soon: I was thinking it’d happen sometime between 2015 and 2020.  (Apparently I wasn’t the only one: President Bush predicted the same in 2007.)   But it looks like we may get there early: FT predicts unsubsidized grid parity within 3 years.  And once you include subsidies, we’re already there, which is why Google just invested $280 million in SolarCity, which finances rooftop solar projects.

So as PV prices continue to come down with additional technology improvements and economies of scale, we’ll soon see demand skyrocket as everyone realizes they can get financing to put up solar cells and save more in electricity costs than they pay in lease (or interest) payments for the system.  What then?

At the moment, solar PV is a tiny fraction of total energy production, but is growing exponentially.  According to the Renewables 2010 Global Status Report, “Between 2004 and 2009, grid-connected PV capacity increased at an annual average rate of 60 percent.  An estimated 7 GW of grid-tied capacity was added in 2009, increasing the existing total by 53 percent to some 21 GW.”  Total world energy use is about 16TW, growing at about 5%/yr.  If you look at actual production and not just capacity, solar PV production is below 0.1% of total energy use.

Given how hard solar PV manufacturers are working to ramp up production exponentially at 60%/yr (doubling every year and a half or so), the increased demand we are likely to see (as prices drop low enough to make it worth switching to solar) is likely to result in price declines flattening out somewhat, and the growth in production continuing on its exponential trajectory (perhaps even accelerating slightly).

For how long?  Well let’s run some math on the exponential growth and see how long it’d take for solar production to ramp up to handle all of the world’s growth in energy consumption:

At year n, world energy use should be at approximately (16 TW * 1.05^n) and annual growth should be 5% of that: (.05 * 16 TW * 1.05^n).  If we add in a 25% load factor to account for the fact that the sun doesn’t shine all day, solar production should be about (.25 * 21 GW * 1.60^n) and annual growth about 60% of that: (.60 * .25 * 21 GW * 1.60^n).

Plugging that into WolframAlpha gives us n=13, which means that at current growth rates, annual solar PV production could surpass the annual growth in world energy consumption sometime next decade.  Even if solar PV growth slowed to 43%/yr, we’d still break even by 2020.
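
For anyone who wants to check the arithmetic without WolframAlpha, the same break-even calculation works out like this:

```python
# Reproducing the break-even calculation above: find n where annual solar PV
# production growth (.60 * .25 * 21 GW * 1.60^n) catches up with annual world
# energy demand growth (.05 * 16 TW * 1.05^n).
import math

solar_growth_added = 0.60 * 0.25 * 21       # GW of new solar production per year at n = 0
demand_growth_added = 0.05 * 16000          # GW of new world energy demand per year at n = 0

n = math.log(demand_growth_added / solar_growth_added) / math.log(1.60 / 1.05)
print(f"break-even after about {n:.1f} years")   # ~13 years, matching the post
```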

Such exponential growth likely won’t continue that long, but just to illustrate the power of compounding, if it did, then solar capacity would exceed total world energy use (on current growth trends) less than 10 years after that, by which time the world would be using 40TW of energy.

So, pretty equations are nice, but what does that mean will likely actually happen in the messy real world?  My prediction is that prices will continue to decline for a few years until we’re obviously past unsubsidized grid parity.  At that point price declines will flatten out and production will continue to expand at a very high exponential growth rate.  During this time, grid electricity prices will remain fairly flat as well.  Once we hit the point where annual solar PV production exceeds annual energy demand growth, demand for PV will start to drop, and prices of both grid electricity and solar cells will start to drop as well.  As solar PV costs drop, more and more fossil fuel plants will be idled as their marginal cost of production starts to exceed even the fully amortized cost of solar PV.  At that point, the cost of solar PV and grid electricity will be inextricably linked, and will continue to fall over time.

So what would the implications of all this be?  I’m sure you can think of a number of them, and I’m sure there are a number of them none of us have thought of yet.  I have some thoughts as well, but I’ll save them for a follow-up post.

-Scott

On Emergence, Intelligence, Consciousness, and Free Will

February 17, 2009 by

Emergence is the central idea that can explain a wide variety of complex phenomena that resist analysis through reductive reasoning, such as intelligence, consciousness, and free will.

Intelligence

Intelligence is a property that emerges from a sufficiently large, interconnected, hierarchical system that specializes in pattern recognition and prediction. The emergent property we call intelligence is characterized by the ability to simultaneously identify patterns and predict upcoming sequences at widely divergent levels of complexity, from the simplest patterns of raw sensory data up to the most complex meta-patterns imaginable. The degree of intelligence of a system is generally measured by the level of complexity at which it can identify patterns and make well-tested predictions. A common, though not required, characteristic of intelligent systems is the ability to interact with the outside system and effectuate changes that give predictable results. This allows for much more effective testing, disambiguation, and refinement of pattern identifications, and is directly analogous to the difference between the ability to perform controlled experiments vs. having to rely on natural experiments. However, just as macroeconomics and astronomy are still sciences, despite their inability to perform many controlled experiments, so an intelligent system is enhanced by, but does not require, the ability to effectuate behavior in order to be characterized as intelligent.

An intelligent system must receive inputs (sensory input) from an outside system (such as the real world) that exhibits structure and some degree of predictable patternicity in both the sequence of inputs (temporality) and across sensory modalities. Each component of the system (a cortical column, in the case of the mammalian brain) must be able to process the pattern of inputs (either from the senses, or more commonly, from lower and higher level components), identify the most likely pattern represented by the current sequence of inputs, predict the upcoming sequence of inputs, and communicate this prediction, both downward to the lower-level component producing the sequence, as well as upward to higher-level components. Each component must use higher-level components’ predictions as input to help resolve ambiguities in characterizing the pattern represented by noisy input from lower levels. Each component should treat successful predictions as confirmation of the current sequence-to-pattern mapping, but should treat falsified predictions as grounds to attempt to find a new pattern that fits the current input sequence. Any changes in the currently identified pattern should be communicated to all higher-level components, so they can treat the change as a prediction failure as well.
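
To make the scheme above a bit more concrete, here is a toy sketch of a single such component in code. It is only an illustration of the identify/predict/re-identify loop described in this paragraph, not a model of an actual cortical column, and the “patterns” it knows are just hard-coded example sequences.

```python
# Toy sketch of one "component" as described above: it matches its recent
# inputs against a small library of known sequences (patterns), predicts the
# next input, and treats a failed prediction as grounds to re-identify the
# pattern and report the change to the level above. Purely illustrative.

class PredictiveNode:
    def __init__(self, patterns, parent=None):
        self.patterns = patterns      # name -> known sequence of inputs
        self.parent = parent          # optional higher-level PredictiveNode
        self.current = None           # name of the currently identified pattern
        self.history = []

    def _identify(self):
        """Pick the pattern containing the longest recent run of inputs."""
        def longest_suffix_in(seq):
            for k in range(len(self.history), 0, -1):
                suffix = self.history[-k:]
                if any(seq[i:i + k] == suffix for i in range(len(seq))):
                    return k
            return 0
        return max(self.patterns, key=lambda name: longest_suffix_in(self.patterns[name]))

    def predict_next(self):
        """Predict the next input by extending the current pattern."""
        if self.current is None:
            return None
        seq = self.patterns[self.current]
        for k in range(len(self.history), 0, -1):
            suffix = self.history[-k:]
            for i in range(len(seq) - k):
                if seq[i:i + k] == suffix:
                    return seq[i + k]
        return None

    def observe(self, value):
        prediction = self.predict_next()
        self.history.append(value)
        if prediction != value:
            # Falsified (or absent) prediction: re-identify the pattern and
            # propagate any change upward as a prediction failure.
            new = self._identify()
            if new != self.current and self.parent is not None:
                self.parent.observe(new)
            self.current = new

node = PredictiveNode({"rising": [1, 2, 3, 4, 5], "falling": [5, 4, 3, 2, 1]})
for x in [1, 2, 3, 2, 1]:
    node.observe(x)
print(node.current)   # "falling": the failed prediction forced a re-identification
```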

See also On Intelligence, by Jeff Hawkins

Consciousness

Consciousness is a property associated with, but not identical to, intelligence. It is characterized by (and commonly defined as) self-awareness. In terms that don’t involve circular definition, this means that a conscious system must be able to use as input its own internal states, and must be able to exhibit intelligent behavior (pattern-recognition and prediction, see above) with regards to that data, just as with external input. Because the inputs (internal states) are directly affected by the process of analyzing those states, conscious entities have, by definition, a degree of behavioral control over their inputs. Whether and how this control is exercised is dependent on the level of free will (see below) of the system. However, a system need not exhibit free will to exhibit a significant degree of consciousness: even a completely pre-programmed system, equipped with a proper feedback/regulatory system based on internal states, would be considered conscious to a very rudimentary degree.

As with intelligence, consciousness is not a binary attribute, but should be measured in degrees. It also does not require a human level of intelligence: an animal should be considered to have a level of experiential consciousness, to the extent it can (presumably) identify the pattern of internal states as what we would call a “feeling”. Many such animals also exhibit a degree of self-consciousness, as demonstrated by their ability to recognize themselves in a mirror. Social animals (primates and cetaceans, for example) also exhibit social consciousness, in that they can use their own instinctual behavior (such as arousal states, the facial flush of embarrassment, etc.) as an input toward intelligent understanding of the social situation. And, of course, human-level consciousness is characteristically associated with an ability to combine the awareness of internal states with rational deductive and inductive reasoning, and separately with the ability to use language to further refine the ability to understand, explain, and thereby predict one’s own behavior.

As with intelligence, consciousness is a property that emerges gradually with the increasing complexity of a properly organized system. It is likely that many of the same organizational principles that contribute to intelligence are also responsible for the emergence of consciousness. However, consciousness remains a more difficult property to study.

Free will

An intelligent system, particularly one with self-consciousness, needs to be goal-directed to function effectively. As mentioned before, free will is not a necessary attribute of intelligent and/or conscious systems: such a system can be pre-programmed, through evolved instinct or programmatic design, with certain goals, homeostatic ranges, internally reinforced behaviors or outcomes, etc. To the degree a system is solely controlled by such genetically-determined attributes, and/or by its immediate external environment, it does not exhibit free will. However, if the system is sufficiently intelligent and self-conscious, it becomes possible for the system to examine and change its own goals. This process is very familiar to us as humans: many of the most important parts of our lives are the times when we are attempting to determine what we want for ourselves.

As with intelligence and self-consciousness, free will is an emergent property, in that it does not arise from any single source. Rather, it is the product of a large number of complex interactions between innate drives and predispositions, values and goals imparted externally from sources we are emotionally attached to, and the complex interplay of personal experience and the processes of consciousness.

While the sources of free will are most easily expressed in human terms, it is not a uniquely human trait. As with intelligence and consciousness, humans exhibit free will to a greater degree than any other observed system. However, animals also exhibit free will to some degree, and there is no reason to think that an artificial system, if it is complex enough to exhibit sufficient intelligence and self-consciousness, and if it is given the freedom to modify itself, would not also exhibit free will.