U.S. police shooting data visualization: WaPo vs The Guardian

Perhaps the biggest challenge in evaluating the true nature of the problem of police shootings in the U.S. is the lack of national/official data on them. How do you honestly and legitimately evaluate something you aren’t measuring?

Thanks to Jeremy Singer-Vine’s Data Is Plural newsletter, I now know of two news organizations trying to remedy that data gap. The Washington Post and The Guardian have both published independent databases and data presentations. Looking at the two side by side, I can’t help but compare them and notice differences in how they present the data. While I’m very glad that both news orgs are collecting information on this important topic, it’s pretty clear to me that The Guardian’s presentation is both more effective at conveying the insights in the data and easier to use overall. Here’s what leapt out at me on first browse:

THE GUARDIAN

http://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database

The Guardian makes a lot of excellent choices that sum up to a very effective presentation.

Good: Compact summary up top

[Screenshot: The Counted – people killed by police in the United States in 2015, The Guardian’s summary view]

Pretty much everything I want to know at a state or national policy level is in that summary. As with all news, it’s great to start with the most important stuff.

Great: Emphasizes apples-to-apples comparison

I especially love seeing that when the interactive page loads, it’s set to show normalized rates rather than absolute numbers per state or race (via the “per capita” and “per million” buttons). One of my pet peeves is comparisons that don’t take population differences into account. We’d expect California, for example, to have more total shootings than Wyoming because it has a lot more people; but the rate of shootings per person is actually higher in Wyoming.

[Screenshot: Wyoming vs. California – total shootings and per-capita rates]
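
To make the apples-to-apples arithmetic concrete, here’s a minimal sketch in PHP. The populations are rounded 2015 census figures; the shooting counts are hypothetical, purely to show how normalizing by population can flip a comparison:

    <?php
    // Rate per million residents = count / population * 1,000,000.
    // Counts are invented for illustration; populations are rounded
    // 2015 estimates.
    $states = array(
        'California' => array('count' => 68, 'population' => 39000000),
        'Wyoming'    => array('count' => 2,  'population' => 580000),
    );
    foreach ($states as $name => $s) {
        $rate = $s['count'] / $s['population'] * 1000000;
        printf("%-10s %3d total, %.2f per million\n", $name, $s['count'], $rate);
    }

California “wins” on absolute numbers (68 vs. 2), but Wyoming’s hypothetical rate (about 3.4 per million) is nearly double California’s (about 1.7 per million).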

I also love the choice of a persistent color coding on the state tiles that uses a gradient to show relative rates. The difference between 28 shootings per 100,000 people and 31 shootings per 100,000 is basically noise – so the detail of absolute ranking is not as useful as a more general low-medium-high comparison like the gradient provides.

I love the way the buttons highlight the difference between oranges-to-apples and apples-to-apples comparisons:

[Screenshot: shooting deaths by race and ethnicity, total vs. per capita]

While the emphasis in popular media has been on black vs. white victims, à la http://mappingpoliceviolence.org/, it’s clear from the Guardian data depicted immediately above that Hispanics and Native Americans are also overrepresented among police shooting fatalities (though not nearly as dramatically as blacks).

Great: Presenting victims as both individuals and statistics

[Screenshot: The Counted’s tiled gallery of individual victims]

The combination of tiled image presentation with basic stats (name, age, state, manner of death) emphasizes both the victims’ individual humanity and their collective representation of terrible statistics. It suggests a calendar of violent deaths, a sort of shooting-a-day rhythm (a grim and effective visual concept).

THE WASHINGTON POST

https://www.washingtonpost.com/graphics/national/police-shootings/

There are several poor design choices in this presentation that make it substantially less effective than The Guardian’s.

Not so great: Emphasizing absolute numbers

WaPo leads with this:

[Screenshot: The Washington Post’s lede – a single large count of people shot dead by police]

That’s a crap ton of space used to convey almost no information. Don’t get me wrong; I’m a fan of whitespace, especially for emphasizing something important. But that number doesn’t mean much without a lot more context. How bad is it? How does that number compare to other countries? Is it rising year to year (there isn’t a lot of year-on-year data for this topic, unfortunately)? For those of us who actually live in the States and don’t think of the entire country as an undifferentiated blob: where are those deaths occurring? Are they close to me? How does my home state or city fare?

The Guardian’s summary does a much better job of conveying the shape of the problem.

Not so great: Burying infographics in a horizontal slider with obscure icons

After wasting so much space up top, WaPo buries their most interesting graphics inside a horizontal slider.

Did you catch that? There’s a slider in the middle of the page. Maybe it’s more obvious in the mobile version where the arrows aren’t so much smaller than the content.

The Guardian uses tabs with text labels (“map”, “list”) to make it more obvious what the various visual options are. WaPo invented some icons to help you toggle between the different visuals:

[Screenshot: WaPo’s icon toggles for the different visuals]

Do you know what a smushed pixellated US map means? Nope, I don’t either. As a web development and design professional, I am a member of the “Look closely, click everything and find out what it means” club, but I’m pretty sure most web users aren’t that pokey. If your icons need a legend or if someone needs to click on them to deduce what they mean, you should probably just use text labels instead of icons.

Not so great: Combining filters with the data table

I’ll bet someone thought it would be clever and resourceful to make the data table double as a filter for everything else. Like many clever things, it turned out to be confusing, creating more problems than it solves. Now the filters are somehow both huge and obscure.

[Screenshot: WaPo’s data table, which doubles as the filter controls]

Did you even realize that clicking on the “female” data would filter to female? No? I rest my case.

Terrible: Displaying monthly data in a way that makes it difficult to compare months

[Screenshot: WaPo’s monthly shootings chart, with the months squeezed side by side]

Why the hell would you do this? To save on vertical space? Is there an editor at the WaPo who still thinks people don’t scroll on the web? Display the months on individual lines so they’re easier to compare:

[Chart: the same monthly data with one month per line]
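
Even a toy text chart makes the point once each month gets its own line, because the bars all start from the same left edge. A sketch in PHP, with made-up counts:

    <?php
    // One month per line: bar lengths share a baseline, so months are
    // easy to compare at a glance. Counts are invented.
    $monthly = array('Jan' => 59, 'Feb' => 51, 'Mar' => 64, 'Apr' => 57);
    foreach ($monthly as $month => $count) {
        echo $month . ' ' . str_repeat('#', $count) . ' ' . $count . "\n";
    }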

It’s not really useful to show the number per month before you have at least two years’ worth of data – you can’t exactly infer seasonal trends from a single year. But it looks like WaPo built this thing to last and update over several years, so the monthly breakdown will become more meaningful over time.

Interesting: Squaring the states for an apples-to-apples comparison with geography in play

[Screenshot: WaPo’s map of the states as equally sized squares]

One problem with using a U.S. map to display rates as color-coded states is that it includes some information that’s not helpful and potentially distracting: land area. That is to say, if Texas and Rhode Island both have really high crime rates, Texas shows up as a big old lump of bad and Rhode Island barely shows up at all.

The Guardian avoids this issue entirely by displaying states only as tiles divorced from their geography. WaPo’s map of the states as equally sized squares attempts to address this issue and skirts the need to magnify Rhode Island and the other small states.

The one additional takeaway you really get from the squared states map is that the higher rates of gun violence are in the Southeast and West. There are some hitches to this ride, though. Something about the conversion to squares makes Kansas and Oklahoma look like part of the Southwest.

Thoughts?

The more important and controversial the topic, the more important it is to get the data and the visual data stories right.
What do you think? Who did it better, and why? Which choices work well, which don’t, and why?

Lessons from Drupalcon 2014

Though I’ve been involved in larger and larger web development projects over the last five years of my day job and freelancing, I’m still fairly new to Drupal development. So I was excited to win a fortunately timed ESIPFed scholarship to my very first Drupalcon, Drupalcon 2014.

I expected that open data and open source culture would go together well, and indeed they did at Drupalcon. Though the greater science community has been glacially slow in coming around to the idea of “open,” the folks I met at Drupalcon who came from science institutions were all friendly and generous with their knowledge in the true spirit of open source.

In addition to meeting other Science on Drupal people and getting a better feel for the greater Drupal community – which I now know is awesome and welcoming – I arrived at Drupalcon with a parallel mission: learn more about how to work with lots of other people to make great websites in Drupal. Now that I’m moving into full-time professional web development, I feel a strong need to learn from and leverage the superior skills and experience of my colleagues.

For the non-developers in my audience: getting from “I need a website” to launching one takes more than just technical skills. The difference between writing code and delivering a working website that meets or exceeds a client’s needs is like the difference between telling people you like bats and researching and implementing a plan to stop the spread of white-nose syndrome between the bat caves of North America.

To me, actually building a website is the smoothest part of the process. In my experience, the bulk of the time and effort goes into understanding what the client and website users want or need, communicating what I or my team can deliver and how we will deliver it, and adjusting that delivery as things inevitably go awry.

I set out at Drupalcon to learn more about tools and processes that can help make these “soft” tasks easier and more effective in a Drupal development setting. Two of the sessions where I picked up many important tips were the Axure prototyping workshop and the Weather.com migration case study.

Why prototype a website?

Prototyping is a crucial communication tool for the design and testing process, especially for larger and more complex websites and features. It can be as simple as boxy drawings on paper or as involved as a full-scale, interactive prototype that is just a few tweaks away from the final product.

In any case, having something for someone to look at, maybe even click through, helps you gather useful feedback and adjust your design accordingly. Lots of early feedback can help you avoid wasting time building things that are wrong for the client. A prototype is one of several tools you can use to head off painful design conversations after the build is already done.

Should I prototype in Axure, Drupal, or something else? It depends.

Dani Nordin, who led the Axure prototyping session at Drupalcon, discussed some of the pros and cons of prototyping directly in Drupal. On the one hand, iteration is much slower in Drupal than it is on paper or in a lo-fi wireframe. On the other hand, Drupal, with its highly structured content models and interrelated everything, can be tricky to emulate in a prototype.

So when do you prototype in Drupal, and when do you use something else? First, you have to consider what the prototype needs to be able to do. Nordin pointed out that you don’t have to prototype everything, and some parts of a website are more important to prototype than others. You might focus on prototyping complex or unusual functionality that is difficult to translate into words, unique content, or chunks of development that would be a huge pain to undo if they didn’t work out.

Another prototyping question: who are you prototyping for? Axure, for example, is a sophisticated tool for quickly constructing and sharing interactive prototypes that can give you a pretty good feel for the navigation, content structure and interaction within a site – if you understand what you’re looking at.

[Image: Designer’s view of an Axure prototype, including some navigational and interactive elements and reasonable mock content. From axure.com]

I’ve been on the client end of a project that used Axure extensively for wireframing and information architecture review. I get the sense that Axure prototypes:

  • Are quick and easy to throw together
  • Generally facilitate communication between development, design and user experience professionals
  • Need a lot of accompanying narrative and potentially some real content to elicit useful feedback from clients

It’s easy to see the professional utility of tools like Axure at the lo-fi wireframe and interaction prototype stage. I am not, at this stage, mucking around with the colors of boxes and the sizes of drop shadows. As an information architect or interaction designer, I’m thinking about what kind of content needs to go where, what it should do and how it should relate to other content in the site. As a developer, this is exactly the kind of information I need to start building a site.

With my user experience hat on, I can further appreciate having the ability to use Axure to prototype and test quite a bit of site navigation and interaction before any part of the site is even built. Early testing can help me avoid a lot of headaches later on, such as when I realize late in the process that I need to totally rearrange the content and break all sorts of relationships and structures that I’ve already developed around.

But you may find it difficult to engage clients with lorem ipsum, empty boxes and Heading 1 placeholders. For people who aren’t design- or development-minded, there just isn’t much in a lo-fi prototype to react to. Much of your most important client feedback may not come your way until your clients have seen real content in your design.

And realistically, real content in a real design may not (probably won’t) happen until after you’ve already made some design decisions and development commitments. As Nordin put it during the session: “Who has the content before they start building? No one? Okay.”

(Aside: In case you are wondering how one designs a website without the content (words and pictures) that go into the web pages: one way is to use some sort of templating system. Content management software like WordPress or Drupal is essentially a sophisticated website templating system that web developers can customize. Someone with zero web development experience can then use these templates to create and publish new pages within a website.

Templates can be as straightforward as nice borders surrounding big blank boxes that people fill in like stationery. More abstracted templates anticipate the structure of future content and include spaces and behaviors for each distinctive piece of content.

Designing for a content management system is kind of like making a packing list for a 3-month backpacking trip. You want to pack as little as possible and re-use as much as you can; but you also want to pack enough to be reasonably comfortable and well-prepared for what lies ahead.)
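
To make the “abstracted template” idea concrete, here’s a minimal sketch of a WordPress-style post template in PHP. The template tags (have_posts, the_title, the_post_thumbnail, the_content) are real WordPress functions; the layout itself is invented:

    <?php
    // A bare-bones post template: the structure stays fixed, and each
    // slot is filled from whatever the content manager publishes.
    while (have_posts()) {
        the_post();
        echo '<article>';
        echo '<h1>';
        the_title();                  // slot for the headline
        echo '</h1>';
        the_post_thumbnail('large');  // slot for a featured image
        the_content();                // slot for the body copy
        echo '</article>';
    }

Someone with zero web development experience never touches this file; they just supply a title, an image and some text, and every post comes out consistently structured.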

Finally, if you’re building a Drupal site, you are probably building a site for clients who are going to be managing the content themselves. Drupal content management being what it is, your client may sign off on a prototype that leads to you developing something that turns out to be a nightmare for them to manage. To really get at the implications of certain design decisions, you have to test the Drupal content management interface with the content managers. That you can do only in Drupal itself.

Going big with small victories

I love studying how people solve big problems. Problems that are big in terms of sheer volume, and big in terms of complexity.

So I was fascinated by the technical implications of this truly big problem: migrating the world’s largest website onto Drupal.

The Weather Channel’s website has to serve up more than 2 million different and constantly changing forecasts in the U.S., and lots and lots of bandwidth-hogging media to boot. What’s more, the system has to be able to hold up to huge surges in traffic so it can continue to provide crucial emergency information during major weather events.

The answer to all these challenges wasn’t wholly contained in Drupal. Part of the performance solution that MediaCurrent and weather.com came up with was to chop up page templates into modular content and serve up different pieces simultaneously to speed up delivery. In other words, when you load a page on weather.com, these three things happen:

  • Drupal delivers a page template
  • AngularJS and edge side includes rewrite the page as it goes out to the browser
  • A data services layer delivers additional data to populate content

The page templates are optimized for caching and are cached at several locations around the U.S. by their content delivery network, Akamai, to further speed up performance.
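
I don’t know exactly what weather.com’s templates look like, but the edge-side-includes idea goes roughly like this: the cached page shell contains placeholder tags that the CDN resolves on each request, while client-side code fills in still more data afterward. A hypothetical sketch (the fragment URL and $locationId variable are invented):

    <?php // Cached page shell served by Drupal ?>
    <div id="current-conditions">
        <!-- The CDN swaps this tag for a live forecast fragment at the
             edge, without another round trip to the origin server -->
        <esi:include src="/fragments/forecast?loc=<?php echo $locationId; ?>" />
    </div>
    <div id="radar" ng-controller="RadarCtrl">
        <!-- AngularJS populates this client-side from the data services layer -->
    </div>

The payoff is that the slow-changing shell can be cached aggressively while the perishable pieces stay fresh.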

Performance aside, it seems like the most compelling reason TWC went with Drupal was its powerful and flexible options for content management. Among other things, the team was able to develop an ingenious system that blends widgetized content and custom additions to the Panels module’s functionality. Content managers at TWC can use a grid system to configure re-usable templates and design variants, and it’s all designed to adjust to different devices and maximize re-use of content and templates.

So, I digressed into cool technical challenges. But I also had major takeaways from this presentation that weren’t technical: the importance of relationships and small victories.

First of all, the solutions architect from MediaCurrent said he invested some time in getting to know the in-house web developer team to better understand the resources they had available. During the session, the rapport between him and the development lead at TWC was obvious.

Second of all, this team churned out a small side project early on: switching everyone over to content entry with Drupal. The switch led to a drop in support needs and an uptick in publishing velocity that validated their choice of Drupal for their platform – a quick win.

Website migrations, especially huge website migrations, have a lot more baggage and pain points than building a similarly sized website from scratch. The more ground you’re covering with more people, the more can go wrong. I’m sure having an early, bite-size victory under their belts injected some momentum and positive energy into the project that buffered the team against the more grueling tasks that came later and are yet to come.

Getting it right with a project this large and complex takes a lot of close communication and careful planning. In that context, it pays to invest in good relationships and communication with your collaborators on large projects. You’re going to be stuck with those people for a while, and you’re going to be stuck with them even longer if it doesn’t go well because you’re not talking to each other.

As I mature as a developer and graduate to ever-bigger and riskier projects, I’ll keep reminding myself that it’s important to aim for small victories and invest in my working relationships.

Doing something about discouraging data

I blogged earlier about some discouraging data on the involvement of women in professional CS. I also bemoaned the eye-roll-inducing culture of computer geeks that I encountered at university. And I wondered (offline) about how and why the tiny minority of women programmers was holding up.

A few months later, I’m thinking very seriously about joining their ranks.

What got me thinking this way?

Meeting, learning from, and learning with a ton of cool female developers, courtesy of Girl Develop It Boulder. Through GDI workshops I’ve rediscovered the fact that I love creating things with code. I love it so much that I can get lost in it for hours without noticing the time.

Don’t get me wrong; working in communications has been fun and challenging in its own ways. I’m deeply grateful to have picked up a lot of experience in project planning and people management. But I’ve been feeling for months that it’s high time to put those very deliberately earned “soft” skills to use on more complex technical and social challenges with a dedicated team. I’ve spent my entire career at academic/nonprofit/government institutions with old-fashioned management – it’s time to leap into the modern business world and find a place that adequately exploits my combination of technical savvy and immersion in people, culture and connection.

Fortunately, living in the heart of the Boulder tech community, I don’t have to leap too far (at least not in a geographic sense).

At this point, I’ve taken classes in HTML5/CSS3, Javascript, Git, Python and UNIX server management, and I’m about to start a comprehensive bootcamp in web development. I’ll use what I learn to rebuild and streamline this site. This WordPress theme has served me well for a long time, but it is time to move on and up!

WordPress to Drupal 6. Magic! Test post. Now messing with title.

UPDATE 2: Successful test! Manual import showed only one “updated” node, the one corresponding to this blog post. Content did indeed update to match the first update. The other 9 TEST Top Stories were “new”. All 10 retained the original “Authored on” date. Next test is to change the title of the post from “WordPress to Drupal 6. Magic! Test post.” It shouldn’t create a duplicate, as I have set the importer’s GUID field to map to the feed’s guid!

UPDATE: Realized that Content > Feeds and Site building > Feed importers correspond to different modules. At this point, duplicates are resulting from the FeedAPI module (Content > Feeds) and the Feeds module (Feed importers) operating simultaneously. Now testing how the “replace existing node” setting in Feeds works. See if this update shows up in TEST Top Story on manual import!

This post should show up as a post of content type TEST Top Story on the dev version of my company’s Drupal 6 site that I’m working with a web support team to troubleshoot. I duplicated the feed importer configuration on the site and set it up to ingest content from my blog and convert each post into a node of type TEST Top Story. I also added a custom image node to my feed like the one that I added to my company’s WordPress blog.

Please note that I changed the parser to look for a GUID to populate the GUID field instead of simply duplicating the link. I’m hoping this will address the content duplication problem we’ve been having.
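
For context, a WordPress feed gives each item both a link and a guid node, and the guid (unlike the link) stays stable through title and permalink changes. Illustrative values:

    <item>
        <title>WordPress to Drupal 6. Magic! Test post.</title>
        <link>http://example.com/2013/04/test-post/</link>
        <guid isPermaLink="false">http://example.com/?p=123</guid>
    </item>

Mapping the importer’s GUID field to the feed’s guid, rather than to the link, should let the importer recognize an updated post as the same node instead of creating a duplicate.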

Content in the existing Top Stories feed sorts by Authored on date – which corresponds to the day and time of content import. This should not change, because we don’t want blog posts suddenly jumping to the top of Top Stories when we update them.

You should be able to manually import content into the TEST Top Story content type by going to Content list > (filter to TEST blog feed type) > TEST blog feed feed and choosing the “Import” tab.

I’ll add a random featured image here for fun.

http://mousebreath.com/2012/08/13-cats-fighting-for-world-and-dog-domination/

Don’t worry, I will delete when test is done.

The science of science communication, summarized

[Stock photo of a stereotypical scientist. No, do not do a Google image search for “scientist” if you do not want to be depressed and outraged.]
Is this what scientists think of “the public”? Stock photo from Getty via Jacks of Science.
I’ve identified as a scientist for most of my life, despite leaving at the end of my master’s to pursue a career in science communication. The biggest challenge for me in that career shift – next to learning to meet a zillion little deadlines every day in lieu of huge ones once every few years – was learning to be present and relatable.

By default I am the classic cerebral, shy, white-coat nerd type. I’m still constantly fighting my own tendencies to live inside my own head and spew evidence faster than others can process it – tendencies that the culture of academic science enhanced in me, even socialized into me and my former science colleagues. I think I’ve finally managed to internalize the notion that I’m not just trying to reach “the public” with science; in fact I am part of “the public.”

The point is, sometimes scientists need to be reminded of their own humanity. And who better to do that than humanities scholars?

There’s a whole issue of PNAS out, dedicated to the science of science communication and based on a meeting of the same name that I was at one point dying to attend. It turns out many of the sessions were recorded, and you can still view them online at the meeting website. Or you can go to the 2013 meeting!

I doubt I will make it to the 2013 meeting. But I have the videos and the special issue of PNAS to relish. One piece from the special issue, Communicating science in social settings, includes a summary and discussion of assumptions scientists often make about “the public” and “the media” that, based on lots of social science studies and extensive survey data, deserve further scrutiny. Here are my takeaways from that section:

  1. More information is not better.
     Resist the urge to summarize your entire body of scientific knowledge in one conversation. Make one point. Make it quickly and make it well.
  2. The public still trusts scientific institutions.
     There goes that excuse.
  3. Stories are much more powerful than lectures.
     How well do you remember the last three movies you saw? How well do you remember the last three two-hour lectures you saw?
  4. No one totally ignores his own worldview when interpreting scientific information.
     That includes scientists.

    Discouraging data: women in CS and IT

    In making my mark in the realm of data and information visualization, it will probably do me good to become a better and more knowledgeable coder. I am now looking into pursuing a little more CS education, and am excited about diving into edX MOOCs in computer science (remember when edX was OCW?).

    I’ve never shied away from things technical. I enjoy every opportunity I get to learn new software and programming languages, and nothing sucks me into an all-absorbing work cave as effectively as a new Javascript, HTML or CSS coding challenge. I’m even considering diving much deeper into CS than just the basics. After all, the entry-level pay for a computer scientist or software engineer is at least 1/3 higher than the entry-level pay for people in my current line of work.

    However, these data give me pause:

    Looking at the BLS numbers, it is interesting that these professions attract more women (as a percentage) than software engineers (20.2%):

    • Bailiffs, correctional officers, jailers (26.9%)
    • Chief executives (25.0%)
    • Database administrators (35.3%)
    • Biological scientists (45.1%)
    • Chemists and materials scientists (30.0%)
    • Technical writers (50.4%)

    Even the professions that are said to have a glass ceiling (such as CEO) have more women in them than software development. Based on the number of science positions listed in the BLS data with substantial numbers of women in them, it is clear that the myth that women are afraid of math or science is just plain wrong (even if less than 1% of mathematicians are women). And given the bizarre outlier of DBAs at 35.3%, and technical writers at 50.4%, we can see that women certainly do not dislike computing fields in general.

    IT gender gap: Where are the female programmers? by Justin James

    Now I remember why I wasn’t attracted to CS at university. I would try to strike up conversations with computer geeks, and then get shut out of the weirdly intense technobabble tournament that every computer geek conversation eventually turned into. My work is now and was then a huge part of my life; but I learned very early that the people I surround myself with are at least as important as the work that I do. At the time, a choice of major seemed like a choice to surround myself with people like the people in that major for the better part of my adult life.

    I can’t be the only woman who looked at the majority culture of computer programmers and thought, is this it?


    #ESA2013 Ignite: Open Science

    I had sooooo much fun organizing my first Ignite talk session. I would do it again in a heartbeat. I met several excellent people and learned a lot about data, R and collaboration tools. I am also super proud of how awesome my speakers and moderator are, and how thoughtful and stimulating the discussion was.

    So I’m sharing it all like a proud session mama. Here are the session details from the program and, when available, the talks themselves:

    Sharing Makes Science Better

    Organizer: @sandramchung | Moderator: @jacquelyngill

    Scientists too often labor alone. The need to closely guard ideas during the race to immortalize them in professional publications can make the practice of science crushingly lonely and ill-informed by tools and knowledge that could make science easier and better. Occasional scientific meetings are often the only opportunities to share ongoing work and connect with colleagues outside of one’s immediate working environment. But there’s a fertile online science ecosystem of innovation, collaboration and mutual support that carries on all year round, and its lifeblood is a network of scientists and science lovers who openly share tools, data, knowledge and ideas that help all researchers to do stronger, better, faster science. The rapidly growing open source and online science communities suggest a new model of doing science in which we build our work on tools, data, knowledge and ideas that are freely offered and contribute our own in return. This session features several free and open-source tools that ecologists have created specifically to help fellow researchers do the work of ecological science, as well some other tools we didn’t create but have tried and found enormously useful. We encourage our colleagues to try them, improve upon them, and perhaps most importantly, share what they’ve learned so that others can benefit as they have.

    IGN 2-1

    Big Data in Ecology

    | @ethanwhite, Biology, Utah State University, Logan, UT

    Slides and text

    Increasingly large amounts of ecological and environmental data are available for analysis. Using existing data can save time and money, allow us to address otherwise intractable problems, and provide general answers to ecological questions. I will discuss why we should be actively using this data in ecology, how to get started, and give examples of what can be accomplished if we embrace an era of big data in ecology.

    IGN 2-2

    EcoData Retriever – automates the tasks of fetching, cleaning up, and storing available data sets

    | @bendmorris, University of North Carolina, Chapel Hill, NC

    Ecology often relies on data that has already been collected, and an ever-increasing amount of biological and environmental data is now available online. However, it can be difficult and time consuming to compile synthetic datasets from data files stored in various online repositories or research web sites. The EcoData Retriever is a community-centered tool that automates discovering, cleaning up, and organizing ecological data into the format of your choice. I’ll speak about problems solved by the Retriever and touch on future directions aimed at further utilizing community effort and the web to automate ecological data access.

    IGN 2-6

    R-based tools for open and collaborative science

    | @recology_ (Scott A. Chamberlain), Department of Ecology and Evolutionary Biology MS 170, Rice University, Houston, TX

    Open science is the practice of making the elements of scientific research – methods, data, code, software, results, and publications – readily accessible to anyone. While this has great potential for advancing research, the absence of an open science toolkit prevents open science from being more widespread. We are building bridges between data (e.g., Dryad) and literature (e.g., PLoS journals) repositories and the open source R software, a programming environment already familiar to many ecologists. These bridges facilitate open science by bringing together data acquisition, manipulation, analysis, visualization, and communication into one open source, open science toolkit.

    IGN 2-7

    Social media for scientific collaboration

    | @sandramchung, NEON Inc.

    Sharing Makes Science Better: Social Media for Ecologists from Sandra M Chung on Vimeo.

    Scientific research is about the nurturing of knowledge and ideas. And to knowledge- and idea-lovers, the Internet is a door to an infinite candy store. Social media provide a means to quickly access exactly the online knowledge you want – by filtering the grand store of information through interaction with the people, topics and communities that matter to you. I wouldn’t stop at just knowledge consumption, however. Sharing your science online can connect you with mentors and collaborators, sharpen and deepen your science, hone your communication and teaching skills, and even earn you funding.

    IGN 2-9

    The power of preprints: the open publication project for ecologists

    | @cjlortie, Biology, York University, Toronto, Canada

    Ideas are free but not cheap. Peer-reviewed publications are still the major form of accepted dissemination of ecological ideas. Even with open access however, this communication modality is outdated. Discussion, feedback, transparent review, versioning, ranking, and articulation of both idea development and peer-review are needed to accelerate scientific discovery. A new communication venue is proposed herein: archival of open access pre-prints similar to arXiv but with annotation, review, and discussion. Think stackoverflow + arXiv for ecologists; not a final step in the evolution of scientific communication but an affordable idea we need to explore.

    Add your Twitter username to your conference badge

    [Image: proof of the Twitter sticker design]

    A Twitter sticker in action

    I designed these Twitter stickers in July 2012 to hand out at the ESA meeting in Portland during and after the social media workshop I ran with Jacquelyn Gill. They’re getting more and more popular (as of ESA 2013, I’ve handed out nearly all of the original 300 I printed) so I thought I’d share the artwork and information on how to order them.

    I used Sticker Mule to order custom 3.5″x.75″ rounded corner stickers that fit nicely below your name on your scientific conference badge (pictured above).

    ADDENDUM 2013.08.12
    Feel free to re-use, modify and share the design. But please do not sell the stickers. They are for personal and academic use only.

    MOOCs are not the end of education; they are a game changer

    Some teachers are brilliant and talented coaches and mentors. But there’s still way too much focus on information delivery in higher education that just doesn’t make sense in an era when so much information is already freely available. The Powerpoint lecture generation is long overdue to move on and acquire different skills, and MOOCs are an important disruptive innovation that I hope will hammer that point home.

    Many traditional professors could easily be replaced by a recording, à la Real Genius. As the author points out, MOOCs are good for information delivery, and many professors do little more than deliver information – the “real” learning happens in the problem sets, the projects and the essay drafts, in the performance, feedback, revision cycle, which many universities and professors relegate to lowly paid graduate or even senior undergraduate students and to which they devote token attention.

    MOOCs, then, are cost-effective substitutes for lecturers. Why pay someone every year to deliver the same lecture to a limited-capacity room when you can simply pay the same person once to record the lecture and distribute it to a zillion more paying students almost anywhere in the world?

    With the cost of higher education rising as rapidly as it has of late, students and parents are demanding more value from the education services provided by universities and colleges. Many educators are going to have to change the way they teach to demonstrate their relevance and value in the rapidly changing education marketplace, just like folks trained decades ago in other professions are now facing the need to retrain and modernize their skills to stay gainfully employed.

    In the Internet era, when information is so cheap and easy to get, many teachers still maintain an antiquated focus on information delivery. Think of information like open source software. The raw materials are free, but you need a lot of training or practice or both to get them to work for you. So teaching with the Internet ought to be more like delivering a value-added service for free and abundant information resources.

    I, like many others, would expect education to shift away from information delivery and more toward coaching students in finding, evaluating, using and generating information.

    Is the attrition rate for online courses appallingly high? Yes. But that’s not the point. If you take the savings you reap from not having to pay the lecturer to deliver the same lecture every semester and invest it in a rich layer of coaching and mentorship on top of that online course, you might find that that course does more of what we in the real world need and expect it to do: prepare students for a world where information is cheap, and judgment, creative insight, analytical and collaborative skills are the real prizes.

    Adding a custom image node to a WordPress RSS feed

    I’ve been working with a web development firm to update the homepage of my employer’s website. One of the features of the updated homepage is a nicely styled newsfeed that pulls in both news content and the latest posts from our externally hosted WordPress blog. The developers asked me to add a custom node to the RSS feed of the WordPress blog (which I manage) with the featured image URL in it so they could embed that image in a news feed on the homepage. In other words, they wanted me to tweak the RSS feed so it included this bit of markup in each item:

    <image>featured_image_url</image>

    After trying about a dozen plugins off the shelf and poring over many hanging support threads that never resolved the issue, I realized that there isn’t an up-to-date WordPress plugin out there to do this. After consulting the WordPress codex Function Reference and hacking two plugins that get sort of close (SB RSS Feed Plus and Featured Image in RSS), I figured it out. Here I share with you the successful results.

    UPDATE: Sage Lichtenwalner has suggested two much better ways to accomplish the same results. The fastest and easiest one is to go to the third link in his comment and copy and paste the image node example code straight into your functions.php file (in your child theme folder, of course).
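
    For reference, a functions.php-only version along those lines might look roughly like this – a sketch that uses the core rss2_item action, which fires inside each feed item (the function name is mine):

        <?php
        // In the child theme's functions.php: echo the extra node inside
        // each <item> of the RSS2 feed, with no template editing required.
        function my_rss2_featured_image() {
            if (has_post_thumbnail()) {
                $url = wp_get_attachment_url(get_post_thumbnail_id());
                echo '<image>' . esc_url($url) . '</image>' . "\n";
            }
        }
        add_action('rss2_item', 'my_rss2_featured_image');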

    1. Define a new function that outputs the URL of the post’s featured image.

    One way to do this is to append just the function code below (everything from the word “function” through the closing brace) to the end of your theme’s functions.php file (which is editable from the theme editor in the WordPress dashboard). A better way, if you’re using a child theme, is to define the function in a new functions.php file in your child theme’s main folder. WordPress will append it to the functions defined in the parent theme and in the core of WordPress. Here’s what the functions.php file in my child theme folder looks like:

    <?php
    /**
     * Outputs the featured image URL for use in the RSS2 feed.
     */
    function feed_getFeaturedImage() {
        global $post;
        $thumbnail_url = ''; // fall back to empty if there is no featured image
        if (function_exists('has_post_thumbnail') && has_post_thumbnail($post->ID)) {
            $thumbnail_id  = get_post_thumbnail_id($post->ID);
            $thumbnail_url = wp_get_attachment_url($thumbnail_id);
        }
        return $thumbnail_url;
    }

    2. Edit your WordPress RSS template (/wp-includes/feed-rss2.php) to include the new node and call the new function you’ve created.

    In my case I’m simply going to paste the following code into feed-rss2.php wherever I want the new node to appear:

    <image><?php echo feed_getFeaturedImage(); ?></image>

    As it stands, this solution requires the user to customize code (the RSS template) that may get overwritten every time WordPress updates. I would like to figure out a way to do this via plugin or without editing anything but child theme elements. But for now, this does the job.

    I don’t know much about PHP, other than that it doesn’t appear to be all that different from the other programming languages I’ve learned. I’m also not a WordPress guru. So I leave it to the WordPress and PHP experts out there to amend my solution with more best practices.