Market Research (Over Lunch) (via Text) (from the Roof Top)

As with all good, open source ideas, it’s entirely possible someone is already making headway with it. Spent a couple of days reaching out to colleagues at philanthropic intermediaries to find out who’s working on open data solutions for grantmakers. Check it out:

Spent a lunch hour with Colin Lacon, CEO and President of Northern California Grantmakers, telling him about Open hGrant’s goals. NCG has been shining a light on philanthropic transparency through a couple of its grantmaker education strands, and points to signature efforts by the William and Flora Hewlett Foundation, David and Lucile Packard Foundation, Gordon & Betty Moore Foundation and others to implement whole suites of transparency technologies. It was no surprise to see NorCal grantmakers listed in the founding group over at The Reporting Commitment, and it gave me a little pause to see the initiative included 17 grantmakers in a field of more than 80,000 nationwide. What are grantmakers’ barriers to entry in the open data reporting field?

Not surprisingly, Colin didn’t think it was a matter of technology acumen or access, though he knew a simple tool could be vital. We discussed how transparency is first and foremost a philosophy, a way of seeing the work of philanthropy as a shared effort to transform communities. Open hGrant can be rightly seen as a tactic way down the line, a means to an end.

Connected with Val Rozansky, Director of Knowledge Services at the Forum of Regional Associations, to find out if they had any open data irons in the fire. Val’s an architect of shared technology and taxonomy among regional associations – a beautiful combination of practical economies of scale and cooperative standards. Val’s big question was when Open hGrant would be ready for Drupal. Next on the list.

The March confab with Janet Camarena, Director of the Foundation Center’s San Francisco office, accurately pinpointed all of the issues we’re touching while developing the Open hGrant plug-in: do we need to emphasize hearts and minds or bits and bytes? How do we show the potential of big data initiatives when the data sampling is small? How do we attract philanthropic leadership, and technical expertise, into the same project?

As Long As We’re At It, What If…

Over here at the Walter & Elise Haas Fund we’re porting a website over from ColdFusion (the web platform, not the hypothetical nuclear reaction) to WordPress. Our partners in crime are the folks at Mission Minded, the branding firm that works exclusively with nonprofits. The planning talks were going smoothly when I suddenly had an idea:

What if…

at the moment we put in the effort to rewrite our searchable grants database that lived on our old site…

we wrote it so that the data was machine-readable by leading open data initiatives, such as hGrant?

You know…

Like a WordPress plugin. As simple as Click. Activate. Join the ranks of real-time, open grants data publishers.
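To make “machine-readable” concrete, here’s a rough sketch (in Python rather than the plugin’s eventual PHP) of the basic move: wrap each grant record in microformat-style class names so that aggregators can parse structured data straight out of an ordinary web page. The class names below (`grant`, `grantor`, `recipient`, `amount`) are illustrative assumptions for this sketch, not the actual hGrant vocabulary.

```python
# Sketch: render a grant record as microformat-style HTML so aggregators
# can scrape structured data from a human-readable page.
# NOTE: the class names here are illustrative assumptions, not the
# official hGrant vocabulary.

from html import escape

def render_grant(grantor, recipient, amount_usd, year):
    """Return an HTML fragment with machine-readable hooks."""
    return (
        f'<div class="grant">'
        f'<span class="grantor">{escape(grantor)}</span> awarded '
        f'<span class="recipient">{escape(recipient)}</span> '
        f'<span class="amount" data-currency="USD">{amount_usd}</span> '
        f'in <span class="year">{year}</span>.'
        f'</div>'
    )

html = render_grant("Walter & Elise Haas Fund", "Example Nonprofit", 50000, 2013)
print(html)
```

The page still reads as prose to a grantseeker; the class names are only there for machines. A plugin’s job would be to emit markup like this automatically from whatever the grants database already holds.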

We talked about how a tool like this, if kept simple, could give us an incredibly low bar for participation in what have heretofore been two difficult games in our shop:

  1. Building and maintaining a searchable grants database on our website. Depending on the web architecture, the source of the data (might that be an intractable grants management system?) and the staffing, this is a major undertaking. We spent a lot of time and money developing a web app that grantseekers and media could browse and learn from, and we spend a lot of time and money making sure that app continues to work as web technology, and the way we talk about our grants, changes. Keeping just two years of past grantmaking data out there involves a lot of effort.
  2. Making our grants data available to larger initiatives that aggregate giving data to chart philanthropic impact.
    Even when we have searchable grants on the web, we know the data isn’t being presented in a way that can be easily used by others, for example, the Foundation Center’s Reporting Commitment. Our taxonomies are different, our HTML syntax is different, we’re focusing on different data types… it’s discouraging enough that we haven’t joined that initiative, despite its potential.
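The taxonomy mismatch in point 2 is the kind of thing a small crosswalk table could start to address: map our internal program-area labels onto a shared vocabulary once, at export time, instead of restructuring the whole database. Every label in the sketch below is made up for illustration; a real crosswalk would target whatever taxonomy the aggregator actually uses.

```python
# Sketch: a crosswalk from internal program-area labels to a shared
# taxonomy used by an aggregator. All labels here are hypothetical --
# the point is only that the mapping can live in one small table.

CROSSWALK = {
    "Arts & Culture": "A - Arts, Culture & Humanities",
    "Jewish Life": "X - Religion-Related",
    "Economic Security": "P - Human Services",
}

def to_shared_taxonomy(internal_label):
    """Map an internal label to the shared vocabulary, or flag it for review."""
    return CROSSWALK.get(internal_label, "UNMAPPED: " + internal_label)

print(to_shared_taxonomy("Arts & Culture"))
print(to_shared_taxonomy("Something New"))
```

Unmapped labels get flagged rather than silently dropped, so the crosswalk can grow as new program areas appear.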

Might work. A lot of devil in the details ahead.