Adults get whooping cough, too

I’ve had a cough for a couple of weeks now. It comes and goes during the day, and makes it harder to get to sleep. It’s mainly been annoying. But recently it’s been getting worse and waking me (and my partner) up several times at night. So my partner finally persuaded me to go to Urgent Care.

It’s not the first time I’ve had a cough like this – I had bronchitis several times in my teens and early twenties, and it produced the exact same cough. But I was surprised and dismayed to learn that there is a significant chance that this time I actually have pertussis (whooping cough).

I’ve always associated pertussis with young children – it’s firmly in the “childhood disease for which there is a vaccine” category in my head. In very young children – particularly infants who have not been vaccinated – pertussis infection is frighteningly likely to lead to hospitalization, severe complications or (rarely) death.

What I didn’t know is that pertussis is increasingly more common in adults and teens – particularly in a hotbed of vaccine exemption like Boulder, CO, where I live now. The local vaccination rate dropped below herd immunity levels years ago, and Boulder County (along with many other U.S. cities) experienced an epidemic-level outbreak of pertussis back in 2012 and 2013.

Pertussis cases in Colorado since 2011, via CO Dept of Public Health and the Environment. You can check CDC reports to see if your state has had a lot of cases of pertussis recently. If you live in Colorado, you can find the latest data on reported cases at the state Department of Public Health and the Environment website. You can find out about the latest outbreaks in your city or county by doing an Internet search for the name of your city + “pertussis” or “whooping cough”.

It turns out that pertussis is still in Boulder. And adults are getting it. It’s not likely to kill me or put me in the hospital, but it is extremely unpleasant. And other adults like me, who don’t realize what they have until they’ve already been contagious for weeks, are major sources of infection for more vulnerable populations.

Here is what I have learned about pertussis in the past 24 hours:

1) Pertussis is often missed in adults. A case of pertussis starts out looking exactly like the common cold, and is particularly difficult to distinguish symptomatically from other common respiratory infections. It can take a few weeks for the infection to progress from cold-like symptoms to severe cough. Many adults who contract pertussis never exhibit a “whooping” cough or experience any severe symptoms at all – so the true number of cases of pertussis may be somewhat higher than what’s reported to state and national public health agencies.

2) Adults are major spreaders of pertussis, in part because we are harder to diagnose, and because we are often well through the most contagious period of the disease (the first two to three weeks) before we seek medical help and obtain a diagnosis – if we ever do at all. Finally, adults are less likely than children to have been recently vaccinated (see #7).

By the way, pertussis is *extremely* contagious. In a household where no one has current immunity, everyone will get it.

3) The coughing from pertussis can last three *months*. At its mildest, it’s ‘just a cough’. At its worst, the coughing can be so severe that it makes you vomit and/or experience sleep loss, cracked ribs, severe headaches and exhaustion. It sucks. It sucks a whole lot. You can get prescription drugs to treat it, but they make you drowsy; so during the day you have only cough drops and humidifiers and hot tea with honey to help.

4) You may never know with any certainty whether you actually had pertussis or not. No diagnostic test is 100% accurate. Pertussis testing is quite expensive, and kids always get priority over adults. Your chance of getting a false negative result on a pertussis culture or PCR test increases over the course of the infection, because there are fewer and fewer bacteria in your mucus to detect.1 Tests are also done by humans and subject to human error (contamination, etc.). So the benefit of testing may not outweigh the cost. Healthcare providers will probably assume you have pertussis AND other common respiratory bugs and treat you for all of them at once.

5) The best way to protect the most vulnerable populations isn’t early treatment or better diagnosis. It’s just not feasible to forcibly test everyone with cold symptoms early in their disease, nor is it in our best interest to blanket everyone with cold symptoms with the antibiotics we use to treat pertussis. We’d end up giving a lot of people antibiotics they don’t need, and accelerating the evolution of antibiotic resistant bacteria that will only come back to hurt us more.

So we will inevitably miss a lot of cases of pertussis in adults. The best way to protect ourselves and our loved ones from suffering from pertussis is prevention, through vaccination.

6) OTC cough meds are no more effective than placebo at reducing the duration of a cough.2,3

7) The immunity you acquire from pertussis vaccination or from contracting pertussis infection wanes over time. You need a booster every couple of years to maintain significant immunity to pertussis. If you spend a lot of time around young children or around adults susceptible to respiratory infection, you definitely need a regular booster.

8) Immunity isn’t the only benefit from vaccination. Vaccinated people are not only less likely to get infected with pertussis; they’re also likely to experience less severe symptoms than the unvaccinated.

9) If you get the right antibiotics within the first two or three weeks of illness, you stand a chance of reducing the duration of symptoms. After that, antibiotics will reduce your contagiousness, but may have no effect on the duration of your cough.


More information:

Featured image: The Flatirons in winter, Boulder, CO. CC BY-SA 2.5 Wikimedia Commons

First do no harm – with my health data

There are lots of circumstances under which your employer can legally get access to your personal health information. Just a few examples:

  • You file for a disability accommodation, a worker’s compensation claim or medical leave.
  • You use a company computer to browse for information about prenatal care.
  • You participate in a company wellness fair where your weight, blood pressure and body fat are measured.

You have to trust your employer not to use that information in ways that are harmful to you. Do you? Would you even necessarily know if they did?

The Wall Street Journal recently reported that some companies, like the steadily nefarious Walmart, are hiring outside “employee wellness firms” to mine employee data. The information they collect ranges from what employees buy and where to what prescriptions they’re getting filled—in theory, in order to identify employees with certain health conditions and make predictive suggestions to help manage their healthcare.1

Employers might use wellness program data to negotiate health insurance discounts (one of my past employers did). They might sell or inadvertently release your data to advertisers, too (that’s in addition to all the data you’re already giving away – more about that in a future post).

Wellness data is already escaping into what one expert calls “the great American marketing machine” that pitches products according to your diseases and lifestyles, privacy scholars say.2

Employers can also use health-related information to discriminate against you. For example, there’s already plenty of evidence of discrimination against overweight and obese people in the workplace.7 But to add insult to injury, an employer might also discriminate against overweight individuals on the basis of health costs – reasoning that having fewer overweight people on staff would save the company money on health insurance and sick days, and consciously or unconsciously compensating for those perceived extra costs by paying overweight people less.

As an employee, your legal protections against this kind of discrimination are spotty. There are laws meant to keep our employers specifically from using our health information to discriminate against us in hiring and from mishandling any of our electronically transmitted health information: the Genetic Information Nondiscrimination Act of 2008, the Americans with Disabilities Act, and the Health Insurance Portability and Accountability Act. But federal law provides no protection against weight-based discrimination in the workplace (unless you are morbidly obese, in which case you can seek protection under the ADA), and all bets are off at small companies (fewer than 15 employees, according to the ADA). And much of the health data companies can collect about you isn’t subject to HIPAA at all.

Make no mistake; care for your well-being may not be the only or primary reason for an employer to sponsor a wellness program. They’re probably doing it because they think it’ll make the company more profitable. They may even show your health data to shareholders to encourage shareholders to keep or buy stock in the company.

Assume for a minute that the study showing that companies with wellness programs outperform the stock market is correct. That would mean portfolio managers would drive up the prices of companies with a low percentage of overweight employees. What then happens to the employment prospects of overweight workers? Why wouldn’t a company try to shed as many overweight employees as possible and hire fewer new ones in order to maximize shareholder value?6

If you’re feeling violated already, I don’t have to do much to convince you of the right-to-privacy issues with this development.

It shouldn’t be surprising that companies would invest in data-driven measures to reduce healthcare costs. All the incentives point in that direction. The U.S. has close to the highest per capita healthcare costs in the world and a strange system of employer-based healthcare3,4. A company that’s shouldering a major portion of your inflated healthcare costs would naturally look for ways to economize.

Besides sheer violations of privacy, there’s at least one more reason why we Americans should be uncomfortable with our employers mining our data to shape our health: employment and healthcare are already far too closely intertwined4. How many people leave a job, take a job or stay in a job largely because of the health insurance coverage it does or doesn’t provide? Access to healthcare doesn’t have to factor into employment decisions, and in most developed countries it doesn’t – at least not nearly to the degree that it does in the U.S.

I doubt this kind of thing is happening as much in developed countries with single-payer healthcare, or in countries that don’t bundle health insurance with employment.

You could see this as a reasonably pragmatic measure on the part of the employer. Everybody’s doing it; Google and Facebook, for example, make massive amounts of money off their ability to collect our personal and aggregate data and use it to customize our online experiences. That is to say, they’re selling wildly successful products like AdWords and Facebook Ads that are effective precisely because these companies can use all the data we give them, and more, to predict which ads we’ll be more responsive to. Why shouldn’t other companies benefit from big data analytics?

There’s absolutely nothing stopping a third party from mining all sorts of available information about you (information that we are all constantly giving away; more about that in a future post) and selling their insights to your employer. It’s one way that companies can (and do) blithely violate the spirit but not the letter of privacy and anti-discrimination legislation.

Here’s an example that already happened: the WSJ article points out that Walmart paid a firm to target employees with back pain with information about alternatives to surgical treatment, in an attempt to reduce the percentage of employees who opt for expensive spinal surgery.1 If Walmart actually succeeded at getting more employees to opt against spinal surgery, then we’ve already seen an employer wielding measurable influence over employee healthcare decisions.

As long as some company or person stands to benefit from holding your personal, financial and health data, your data are a precious commodity to someone else. The true power of big data is being able to shape behavior with insights from readily available data. Do you know all the ways companies are using your data to shape your behavior?

Information is power; in the digital era, it is also money. Doubt not that many people and companies will be tempted by both the power and the money, and that many of us will suffer from or actively fight those who succumb to that temptation.

  1. Yes, it’s an article about an article. I’d link to the WSJ original but it’s paywalled.
  2. From
  4. The short story on employer-based healthcare:
    a longer, in-depth story on employer-based healthcare:

Update 2016 Mar 05

U.S. police shooting data visualization: WaPo vs The Guardian

Perhaps the biggest challenge in evaluating the true nature of the problem of police shootings in the U.S. is the lack of national/official data on them. How do you honestly and legitimately evaluate something you aren’t measuring?

Thanks to Jeremy Singer-Vine’s Data Is Plural newsletter, I now know of two news organizations trying to remedy that data gap. The Washington Post and The Guardian have both published independent databases and data presentations. Looking at both of them, I can’t help but compare them and notice differences in the data presentation. While I’m very glad that both news orgs are collecting information on this important topic, it’s pretty clear to me that The Guardian’s presentation of the data is both more effective at conveying the insights that are in the data, and easier to use overall. Here’s what leapt out at me on first browse:


The Guardian makes a lot of excellent choices that sum up to a very effective presentation.

Good: Compact summary up top


Pretty much everything I want to know at a state or national policy level is in that summary. As with all news, it’s great to start with the most important stuff.

Great: Emphasizes apples-to-apples comparison

I especially love seeing that when the interactive page loads, it’s set to show normalized rates rather than absolute numbers per state or race (via the “per capita” and “per million” buttons). One of my pet peeves is comparisons that don’t take population differences into account. We’d expect California, for example, to have more total shootings than Wyoming because it has a lot more people; but the rate of shootings per person is actually higher in Wyoming.
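To make that pet peeve concrete, here is a minimal Python sketch of the normalization. The populations are roughly right, but the shooting counts are hypothetical numbers I made up for illustration; the point is that the state with far more total incidents can still have the lower per-capita rate.

```python
# Hypothetical counts, for illustration only; populations are approximate.
populations = {"California": 39_000_000, "Wyoming": 580_000}
shootings = {"California": 190, "Wyoming": 5}

for state in populations:
    per_million = shootings[state] / populations[state] * 1_000_000
    print(f"{state}: {shootings[state]} total, {per_million:.1f} per million")

# Output:
# California: 190 total, 4.9 per million
# Wyoming: 5 total, 8.6 per million
```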


I also love the choice of a persistent color coding on the state tiles that uses a gradient to show relative rates. The difference between 28 shootings per 100,000 people and 31 shootings per 100,000 is basically noise – so the detail of absolute ranking is not as useful as a more general low-medium-high comparison like the gradient provides.

I love the way the buttons highlight the difference between an oranges-to-apples and an apples-to-apples comparison:


While the emphasis in popular media has been on black vs. white victims, it’s clear from the Guardian data depicted immediately above that Hispanics and Native Americans are also overrepresented among police shooting fatalities (though not nearly as dramatically as blacks).

Great: Presenting victims as both individuals and statistics


The combination of tiled image presentation with basic stats (name, age, state, manner of death) emphasizes both the victims’ individual humanity and their collective representation of terrible statistics. It suggests a calendar of violent deaths, a sort of shooting-a-day rhythm (a grim and effective visual concept).


The Washington Post’s presentation, on the other hand, suffers from several poor design choices that make it substantially less effective than The Guardian’s.

Not so great: Emphasizing absolute numbers

WaPo leads with this:


That’s a crap ton of space used to convey almost no information. Don’t get me wrong; I’m a fan of whitespace, and especially of using it to emphasize something important. But that number doesn’t mean much without a lot more context. How bad is that? How does that number compare to other countries? Is that number rising year-to-year (not a lot of year-on-year data for this topic, unfortunately)? For those of us who actually live in the States and don’t think of the entire country as an undifferentiated blob, where are those deaths occurring? Are they close to me? How does my home state or city fare?

The Guardian’s summary does a much better job of conveying the shape of the problem.

Not so great: Burying infographics in a horizontal slider with obscure icons

After wasting so much space up top, WaPo buries their most interesting graphics inside a horizontal slider.

Did you catch that? There’s a slider in the middle of the page. Maybe it’s more obvious in the mobile version where the arrows aren’t so much smaller than the content.

The Guardian uses tabs with text labels (“map”, “list”) to make it more obvious what the various visual options are. WaPo invented some icons to help you toggle between the different visuals:


Do you know what a smushed, pixelated US map means? Nope, I don’t either. As a web development and design professional, I am a member of the “Look closely, click everything and find out what it means” club, but I’m pretty sure most web users aren’t that pokey. If your icons need a legend, or if someone needs to click on them to deduce what they mean, you should probably just use text labels instead of icons.

Not so great: Combining filters with the data table

I’ll bet someone thought it would be clever and resourceful to make the data table double as a filter for everything else. Like many clever things, it turned out to be confusing, and it creates more problems than it solves. Now the filters are somehow both huge and obscure.


Did you even realize that clicking on the “female” data would filter to female? No? I rest my case.

Terrible: Displaying monthly data in a way that makes it difficult to compare months


Why the hell would you do this? To save on vertical space? Is there an editor at the WaPo who still thinks people don’t scroll on the web? Display the months on individual lines so they’re easier to compare:


It’s not really useful to show the number per month before you have at least two years’ worth of data – you can’t exactly infer seasonal trends from a single year. But it looks like WaPo built this thing to last and update over several years, so the monthly breakdown will become more meaningful over time.

Interesting: Squaring the states for an apples-to-apples comparison with geography in play


One problem with using a U.S. map to display rates as color-coded states is that it includes some information that’s not helpful and potentially distracting: land area. That is to say, if Texas and Rhode Island both have really high crime rates, Texas shows up as a big old lump of bad and Rhode Island barely shows up at all.

The Guardian avoids this issue entirely by displaying states only as tiles divorced from their geography. WaPo’s map of the states as equally sized squares attempts to address this issue and skirts the need to magnify Rhode Island and the other small states.

The one additional takeaway you really get from the squared states map is that the higher rates of gun violence are in the Southeast and West. There are some hitches to this ride, though. Something about the conversion to squares makes Kansas and Oklahoma look like part of the Southwest.


The more important and controversial the topic, the more important it is to get the data and the visual data stories right.
What do you think? Who did it better, and why? Which choices work well, and which don’t?

Lessons from Drupalcon 2014

Though I have been increasingly involved in larger and larger web development projects over the last five years of my day job and freelancing, I’m still fairly new to Drupal development. So I was excited to win a very fortunately timed ESIPFed scholarship to my very first Drupalcon, Drupalcon 2014.

I expected that open data and open source culture would go together well, and indeed they did at Drupalcon. Though the greater science community has been glacially slow in coming around to the idea of “open,” the folks I met at Drupalcon who came from science institutions were all friendly and generous with their knowledge in the true spirit of open source.

In addition to meeting other Science on Drupal people and getting a better feel for the greater Drupal community – which I now know is awesome and welcoming – I arrived at Drupalcon with a parallel mission: learn more about how to work with lots of other people to make great websites in Drupal. Now that I’m moving into full-time professional web development, I feel a strong need to learn from and leverage the superior skills and experience of my colleagues.

For the non-developers in my audience: getting from “I need a website” to launching one takes more than just technical skills. The difference between writing code and delivering a working website that meets or exceeds a client’s needs is like the difference between telling people you like bats and researching and implementing a plan to stop the spread of white-nose syndrome between the bat caves of North America.

To me, actually building a website is the smoothest part of the process. In my experience, the bulk of my time and effort goes into understanding what the client and website users want or need, communicating what I or my team can deliver and how we will deliver it, and adjusting our delivery as things inevitably go awry.

I set out at Drupalcon to learn more about tools and processes that can help make these “soft” tasks easier and more effective in a Drupal development setting. Two of the sessions where I picked up many important tips were the Axure prototyping workshop and the migration case study.

Why prototype a website?

Prototyping is a crucial communication tool for the design and testing process, especially for larger and more complex websites and features. It can be as simple as boxy drawings on paper or as involved as a full-scale, interactive prototype that is just a few tweaks away from the final product.

In any case, having something for someone to look at, maybe even click through, helps you gather useful feedback and adjust your design accordingly. Lots of early feedback can help you avoid wasting time building things that are wrong for the client. A prototype is one of several tools you can use to help prevent design conversations like this.

Should I prototype in Axure, Drupal, or something else? It depends.

Dani Nordin, who led the Axure prototyping session at Drupalcon, discussed some of the pros and cons of prototyping directly in Drupal. On the one hand, iteration is much slower in Drupal than it is on paper or in a lo-fi wireframe. On the other hand, Drupal, with its highly structured content models and interrelated everything, can be tricky to emulate in a prototype.

So when do you prototype in Drupal, and when do you use something else? First, you have to consider what the prototype needs to be able to do. Nordin pointed out that you don’t have to prototype everything, and some parts of a website are more important to prototype than others. You might focus on prototyping complex or unusual functionality that is difficult to translate into words, unique content, or chunks of development that would be a huge pain to undo if they didn’t work out.

Another prototyping question: who are you prototyping for? Axure, for example, is a sophisticated tool for quickly constructing and sharing interactive prototypes that can give you a pretty good feel for the navigation, content structure and interaction within a site – if you understand what you’re looking at.

Designer’s view of an Axure prototype, including some navigational and interactive elements and reasonable mock content.

I’ve been on the client end of a project that used Axure extensively for wireframing and information architecture review. I get the sense that Axure prototypes:

  • Are quick and easy to throw together
  • Generally facilitate communication between development, design and user experience professionals
  • Need a lot of accompanying narrative and potentially some real content to elicit useful feedback from clients

It’s easy to see the professional utility of tools like Axure at the lo-fi wireframe and interaction prototype stage. I am not, at this stage, mucking around with the colors of boxes and the sizes of drop shadows. As an information architect or interaction designer, I’m thinking about what kind of content needs to go where, what it should do and how it should relate to other content in the site. As a developer, this is exactly the kind of information I need to start building a site.

With my user experience hat on, I can further appreciate having the ability to use Axure to prototype and test quite a bit of site navigation and interaction before any part of the site is even built. Early testing can help me avoid a lot of headaches later on, such as when I realize late in the process that I need to totally rearrange the content and break all sorts of relationships and structures that I’ve already developed around.

But you may find it difficult to engage clients with lorem ipsum, empty boxes and Heading 1 placeholders. For people who aren’t design- or development-minded, there just isn’t much in a lo-fi prototype to react to. Much of your most important client feedback may not come your way until your clients have seen real content in your design.

And realistically, real content in a real design may not (probably won’t) happen until after you’ve already made some design decisions and development commitments. As Nordin put it during the session: “Who has the content before they start building? No one? Okay.”

(Aside: In case you are wondering how one designs a website without the content (words and pictures) that go into the web pages: one way is to use some sort of templating system. Content management software like WordPress or Drupal is essentially a sophisticated website templating system that web developers can customize. Someone with zero web development experience can then use these templates to create and publish new pages within a website.

Templates can be as straightforward as nice borders surrounding big blank boxes that people fill in like stationery. More abstracted templates anticipate the structure of future content and include spaces and behaviors for each distinctive piece of content.

Designing for a content management system is kind of like making a packing list for a 3-month backpacking trip. You want to pack as little as possible and re-use as much as you can; but you also want to pack enough to be reasonably comfortable and well-prepared for what lies ahead.)
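To make the templating idea concrete, here is a minimal Python sketch of a template that anticipates content structure. The field names and markup are hypothetical stand-ins of my own, not any real CMS’s template format:

```python
from string import Template

# The template anticipates the structure of future content: every story
# has a title, an author and a body, and the design wraps each one the
# same way. Field names here are hypothetical.
story_template = Template("""\
<article>
  <h1>$title</h1>
  <p class="byline">By $author</p>
  <div class="body">$body</div>
</article>
""")

# A content manager with zero web development experience only has to
# supply the blanks; the markup comes along for free.
print(story_template.substitute(
    title="Adults get whooping cough, too",
    author="Sandra Chung",
    body="I've had a cough for a couple of weeks now...",
))
```

A real CMS template is vastly more elaborate, but the core bargain is the same: the designer decides the structure once, and content fills the blanks forever after.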

Finally, if you’re building a Drupal site, you are probably building a site for clients who are going to be managing the content themselves. Drupal content management being what it is, your client may sign off on a prototype that leads to you developing something that turns out to be a nightmare for them to manage. To really get at the implications of certain design decisions, you have to test the Drupal content management interface with the content managers. That you can do only in Drupal itself.

Going big with small victories

I love studying how people solve big problems. Problems that are big in terms of sheer volume, and big in terms of complexity.

So I was fascinated by the technical implications of this truly big problem: migrating the world’s largest website onto Drupal.

The Weather Channel’s website has to serve up more than 2 million different and constantly changing forecasts in the U.S., and lots and lots of bandwidth-hogging media to boot. What’s more, the system has to be able to hold up to huge surges in traffic so it can continue to provide crucial emergency information during major weather events.

The answer to all these challenges wasn’t wholly contained in Drupal. Part of the performance solution that MediaCurrent and the TWC in-house team came up with was to chop up page templates into modular content and serve up different pieces simultaneously to speed up delivery. In other words, when you load a page on the site, these three things happen (a toy sketch of the assembly idea follows the list):

  • Drupal delivers a page template
  • AngularJS and edge side includes rewrite the page as it goes out to the browser
  • Data services layer delivers additional data to populate content
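Here’s that toy sketch in Python. To be clear, this is my own illustration of the general pattern, not TWC’s or MediaCurrent’s actual code; the tag syntax mimics edge side includes, and the fragment names and contents are made up:

```python
import re

# A cached, mostly static page template with include placeholders.
CACHED_TEMPLATE = """\
<html><body>
  <h1>Forecast</h1>
  <esi:include src="/fragments/current-conditions"/>
  <esi:include src="/fragments/hourly-forecast"/>
</body></html>
"""

# Stand-in for the fast-changing data services layer.
FRAGMENTS = {
    "/fragments/current-conditions": "<p>72°F and sunny</p>",
    "/fragments/hourly-forecast": "<p>Clear through midnight</p>",
}

def assemble(template: str) -> str:
    """Resolve each include placeholder, as an edge server would on the
    way out to the browser."""
    return re.sub(
        r'<esi:include src="([^"]+)"/>',
        lambda match: FRAGMENTS.get(match.group(1), ""),
        template,
    )

print(assemble(CACHED_TEMPLATE))
```

The payoff of this split is that the expensive-to-build template can be cached aggressively while the volatile pieces get filled in per request.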

The page templates are optimized for caching and are cached at several locations around the U.S. by their content delivery network, Akamai, to further speed up performance.

Performance aside, it seems like the most compelling reason TWC went with Drupal was its powerful and flexible options for content management. Among other things, the team was able to develop an ingenious system that blends widgetized content and custom additions to Panels module functionality. Content managers at TWC can use a grid system to configure re-usable templates and design variants, and it’s all designed to adjust to different devices and maximize re-use of content and templates.

So, I digressed into cool technical challenges. But I also had major takeaways from this presentation that weren’t technical: the importance of relationships and small victories.

First of all, the solutions architect from MediaCurrent said he invested some time in getting to know the in-house web developer team to better understand the resources they had available. During the session, the rapport between him and the development lead at TWC was obvious.

Second of all, this team churned out a small side project early on: switching everyone over to content entry with Drupal. The switch led to a drop in support needs and an uptick in publishing velocity that validated their choice of Drupal for their platform – a quick win.

Website migrations, especially huge website migrations, have a lot more baggage and pain points to them than building a similarly-sized website from scratch. The more ground you’re covering with more people, the more can go wrong. I’m sure having an early, bite-size victory under their belts injected some momentum and positive energy into the project that buffered the team against the more grueling tasks that came later and are yet to come.

Getting it right with a project this large and complex takes a lot of close communication and careful planning. In that context, it pays to invest in good relationships and communication with your collaborators on large projects. You’re going to be stuck with those people for a while, and you’re going to be stuck with them even longer if it doesn’t go well because you’re not talking to each other.

As I mature as a developer and graduate to tackle ever-bigger and riskier projects, I’ll keep reminding myself that it’s important to aim for small victories and invest in my working relationships.

Doing something about discouraging data

I blogged earlier about some discouraging data on the involvement of women in professional CS. I also bemoaned the eye roll-inducing culture of computer geeks that I encountered at university. And I wondered (offline) about how and why the tiny minority of women programmers was holding up.

A few months later, I’m thinking very seriously about joining their ranks.

What got me thinking this way?

Meeting, learning from, and learning with a ton of cool female developers, courtesy of Girl Develop It Boulder. Through GDI workshops I’ve rediscovered the fact that I love creating things with code. I love it so much that I can get lost in it for hours without noticing the time.

Don’t get me wrong; working in communications has been fun and challenging in its own ways. I’m deeply grateful to have picked up a lot of experience in project planning and people management. But I’ve been feeling for months that it’s high time to put those very deliberately earned “soft” skills to use on more complex technical and social challenges with a dedicated team. I’ve spent my entire career at academic/nonprofit/government institutions with old-fashioned management – it’s time to leap into the modern business world and find a place that adequately exploits my combination of technical savvy and immersion in people, culture and connection.

Fortunately, living in the heart of the Boulder tech community, I don’t have to leap too far (at least not in a geographic sense).

At this point, I’ve taken classes in HTML5/CSS3, Javascript, Git, Python and UNIX server management, and I’m about to start a comprehensive bootcamp in web development. I’ll use what I learn to rebuild and streamline this site. This WordPress theme has served me well for a long time, but it is time to move on and up!

WordPress to Drupal 6. Magic! Test post. Now messing with title.

UPDATE 2: Successful test! Manual import showed only one “updated” node, the one corresponding to this blog post. Content did indeed update to match the first update. The other 9 TEST Top Stories were “new”. All 10 retained the original “Authored on” date. Next test is to change the title of the post from “WordPress to Drupal 6. Magic! Test post.” It shouldn’t create a duplicate, as I have set the GUID to be the guid!

UPDATE: Realized that Content > Feeds and Site building > Feed importers correspond to different modules. At this point, duplicates are resulting from the FeedAPI module (Content > Feeds) and the Feeds module (Feed importers) operating simultaneously. Now testing how the “replace existing node” setting in Feeds works. See if this update shows up in TEST Top Story on manual import!

This post should show up as a post of content type TEST Top Story on the dev version of my company’s Drupal 6 site that I’m working with a web support team to troubleshoot. I duplicated the feed importer configuration on the site and set it up to ingest content from my blog and convert each post into a node of type TEST Top Story. I also added a custom image node to my feed like the one that I added to my company’s WordPress blog.

Please note that I changed the parser to look for a GUID to populate the GUID field instead of simply duplicating the link. I’m hoping this will address the content duplication problem we’ve been having.
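For anyone unfamiliar with why keying on the GUID matters, here is a minimal Python sketch of the dedup logic. This illustrates the general upsert pattern, not the Feeds module’s actual code, and the field names are hypothetical: a feed item’s link can change between imports, but its GUID shouldn’t, so the GUID makes a stable key.

```python
# Existing nodes, keyed by feed GUID. Field names are hypothetical.
nodes_by_guid = {}

def import_item(item: dict) -> str:
    """Upsert one feed item: update the existing node if we've seen its
    GUID before, otherwise create a new node. No duplicates either way."""
    guid = item["guid"]
    if guid in nodes_by_guid:
        nodes_by_guid[guid].update(title=item["title"], body=item["body"])
        return "updated"
    nodes_by_guid[guid] = {"title": item["title"], "body": item["body"]}
    return "new"

print(import_item({"guid": "post-123", "title": "Test post", "body": "..."}))  # new
print(import_item({"guid": "post-123", "title": "New title", "body": "..."}))  # updated
```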

Content in the existing Top Stories feed sorts by Authored on date – which corresponds to the day and time of content import. This should not change, because we don’t want blog posts suddenly jumping to the top of Top Stories when we update them.

You should be able to manually import content into the TEST Top Story content type by going to Content list > (filter to TEST blog feed type) > TEST blog feed feed and choosing the “Import” tab.

I’ll add a random featured image here for fun.

Don’t worry, I will delete this when the test is done.

The science of science communication, summarized

No, do not do a Google image search for "scientist" if you do not want to be depressed and outraged.
Is this what scientists think of “the public”? Stock Photo from Getty via jacks of science.
I’ve identified as a scientist for most of my life, despite leaving at the end of my master’s to pursue a career in science communication. The biggest challenge for me in that career shift – next to learning to meet a zillion little deadlines every day in lieu of huge ones once every few years – was learning to be present and relatable.

By default I am the classic cerebral, shy, white-coat nerd type. I’m still constantly fighting my own tendencies to live inside my own head and spew evidence faster than others can process it – tendencies that the culture of academic science enhanced in me, even socialized into me and my former science colleagues. I think I’ve finally managed to internalize the notion that I’m not just trying to reach “the public” with science; in fact I am part of “the public.”

The point is, sometimes scientists need to be reminded of their own humanity. And who better to do that than humanities scholars?

There’s a whole issue of PNAS out dedicated to the science of science communication, based on a meeting of the same name that I at one point was dying to attend. It turns out many of the sessions were recorded and you can still view them online at the meeting website. Or you can go to the 2013 meeting!

I doubt I will make it to the 2013 meeting. But I have the videos and the special issue of PNAS to relish. One piece from the special issue, Communicating science in social settings, includes a summary and discussion of assumptions scientists often make about “the public” and “the media” that, based on lots of social science studies and extensive survey data, deserve further scrutiny. Here are my takeaways from that section:

  1. More information is not better.
    Resist the urge to summarize your entire body of scientific knowledge in one conversation. Make one point. Make it quickly and make it well.
  2. The public still trusts scientific institutions.
    There goes that excuse.
  3. Stories are much more powerful than lectures.
    How well do you remember the last three movies you saw? How well do you remember the last three two-hour lectures you saw?
  4. No one totally ignores his own worldview when interpreting scientific information.
    That includes scientists.

Discouraging data: women in CS and IT

In making my mark in the realm of data and information visualization, it will probably do me good to become a better and more knowledgeable coder. I am now looking into pursuing a little more CS education, and am excited about diving into edX MOOCs in computer science (remember when edX was OCW?).

I’ve never shied away from things technical. I enjoy every opportunity I get to learn new software and programming languages, and nothing sucks me into an all-absorbing work cave as effectively as a new Javascript, HTML or CSS coding challenge. I’m even considering diving much deeper into CS than just the basics. After all, the entry-level pay for a computer scientist or software engineer is at least 1/3 higher than the entry-level pay for people in my current line of work.

However, these data give me pause:

Looking at the BLS numbers, it is interesting that these professions attract more women (as a percentage) than software engineers (20.2%):

  • Bailiffs, correctional officers, jailers (26.9%)
  • Chief executives (25.0%)
  • Database administrators (35.3%)
  • Biological scientists (45.1%)
  • Chemists and materials scientists (30.0%)
  • Technical writers (50.4%)

Even the professions that are said to have a glass ceiling (such as CEO) have more women in them than software development. Based on the number of science positions listed in the BLS data with substantial numbers of women in them, it is clear that the myth that women are afraid of math or science is just plain wrong (even if less than 1% of mathematicians are women). And given the bizarre outlier of DBAs at 35.3%, and technical writers at 50.4%, we can see that women certainly do not dislike computing fields in general.

– IT gender gap: Where are the female programmers? by Justin James

Now I remember why I wasn’t attracted to CS at university. I would try to strike up conversations with computer geeks, and then get shut out of the weirdly intense technobabble tournament that every computer geek conversation eventually turned into. My work is now, and was then, a huge part of my life; but I learned very early that the people I surround myself with are at least as important as the work that I do. At the time, a choice of major seemed like a choice to surround myself with the people in that major for the better part of my adult life.

I can’t be the only woman who looked at the majority culture of computer programmers and thought, is this it?

#ESA2013 Ignite: Open Science

I had sooooo much fun organizing my first Ignite talk session. I would do it again in a heartbeat. I met several excellent people and learned a lot about data, R and collaboration tools. I am also super proud of how awesome my speakers and moderator are, and how thoughtful and stimulating the discussion was.

So I’m sharing it all like a proud session mama. Here are the session details from the program and, when available, the talks themselves:

Sharing Makes Science Better

Organizer: @sandramchung | Moderator: @jacquelyngill

Scientists too often labor alone. The need to closely guard ideas during the race to immortalize them in professional publications can make the practice of science crushingly lonely and ill-informed by tools and knowledge that could make science easier and better. Occasional scientific meetings are often the only opportunities to share ongoing work and connect with colleagues outside of one’s immediate working environment. But there’s a fertile online science ecosystem of innovation, collaboration and mutual support that carries on all year round, and its lifeblood is a network of scientists and science lovers who openly share tools, data, knowledge and ideas that help all researchers to do stronger, better, faster science. The rapidly growing open source and online science communities suggest a new model of doing science in which we build our work on tools, data, knowledge and ideas that are freely offered and contribute our own in return. This session features several free and open-source tools that ecologists have created specifically to help fellow researchers do the work of ecological science, as well as some other tools we didn’t create but have tried and found enormously useful. We encourage our colleagues to try them, improve upon them, and perhaps most importantly, share what they’ve learned so that others can benefit as they have.

IGN 2-1

Big Data in Ecology

@ethanwhite, Biology, Utah State University, Logan, UT

Slides and text

Increasingly large amounts of ecological and environmental data are available for analysis. Using existing data can save time and money, allow us to address otherwise intractable problems, and provide general answers to ecological questions. I will discuss why we should be actively using this data in ecology, how to get started, and give examples of what can be accomplished if we embrace an era of big data in ecology.

IGN 2-2

EcoData Retriever – automates the tasks of fetching, cleaning up, and storing available data sets

@bendmorris, University of North Carolina, Chapel Hill, NC

Ecology often relies on data that has already been collected, and an ever-increasing amount of biological and environmental data is now available online. However, it can be difficult and time consuming to compile synthetic datasets from data files stored in various online repositories or research web sites. The EcoData Retriever is a community-centered tool that automates discovering, cleaning up, and organizing ecological data into the format of your choice. I’ll speak about problems solved by the Retriever and touch on future directions aimed at further utilizing community effort and the web to automate ecological data access.

IGN 2-6

R-based tools for open and collaborative science

@recology_ (Scott A. Chamberlain), Department of Ecology and Evolutionary Biology MS 170, Rice University, Houston, TX

Open science is the practice of making the elements of scientific research – methods, data, code, software, results, and publications – readily accessible to anyone. While this has great potential for advancing research, the absence of an open science toolkit prevents open science from being more widespread. We are building bridges between data (e.g., Dryad) and literature (e.g., PLoS journals) repositories and the open source R software, a programming environment already familiar to many ecologists. These bridges facilitate open science by bringing together data acquisition, manipulation, analysis, visualization, and communication into one open source, open science toolkit.

IGN 2-7

Social media for scientific collaboration

@sandramchung, NEON Inc.

Sharing Makes Science Better: Social Media for Ecologists from Sandra M Chung on Vimeo.

Scientific research is about the nurturing of knowledge and ideas. And to knowledge- and idea-lovers, the Internet is a door to an infinite candy store. Social media provide a means to quickly access exactly the online knowledge you want – by filtering the grand store of information through interaction with the people, topics and communities that matter to you. I wouldn’t stop at just knowledge consumption, however. Sharing your science online can connect you with mentors and collaborators, sharpen and deepen your science, hone your communication and teaching skills, and even earn you funding.

IGN 2-9

The power of preprints: the open publication project for ecologists

@cjlortie, Biology, York University, Toronto, Canada

Ideas are free but not cheap. Peer-reviewed publications are still the major form of accepted dissemination of ecological ideas. Even with open access, however, this communication modality is outdated. Discussion, feedback, transparent review, versioning, ranking, and articulation of both idea development and peer-review are needed to accelerate scientific discovery. A new communication venue is proposed herein: archival of open access pre-prints similar to arXiv but with annotation, review, and discussion. Think stackoverflow + arXiv for ecologists; not a final step in the evolution of scientific communication but an affordable idea we need to explore.

Add your Twitter username to your conference badge

A Twitter sticker in action

I designed these Twitter stickers in July 2012 to hand out at the ESA meeting in Portland during and after the social media workshop I ran with Jacquelyn Gill. They’re getting more and more popular (as of ESA 2013, I’ve handed out nearly all of the original 300 I printed), so I thought I’d share the artwork and information on how to order them.

I used Sticker Mule to order custom 3.5″ x .75″ rounded-corner stickers that fit nicely below your name on your scientific conference badge (right).

ADDENDUM 2013.08.12
Feel free to re-use, modify and share the design. But please do not sell the stickers. They are for personal and academic use only.