Sunday, August 9, 2009
Preparing for Afghanistan Elections and Humanitarian Efforts
Some fellow Google engineers and I participated in the Summer 2009 Nangarhar PEAK activity at Camp Roberts near Paso Robles, CA. We got to work with some of the great crisis management and neogeographer minds for a few days as we prepared government-provided data for Todd Huffman, who works for an NGO, to take to the field.
Todd will be observing the upcoming Afghanistan elections and will be using the technology we glued together this week to help him do so, as well as continuing his many nation-building NGO efforts.
The bulk of the data was provided by the National Geospatial-Intelligence Agency, but it was provided in a raw format, with no geospatial viewer.
The Google Earth Enterprise team knew we could help out. We processed the imagery for Todd through Google Earth Fusion, then published the data as both a map and a 3D globe to a Google Earth Enterprise Server running in an Ubuntu virtual machine on a Mac mini, which FortiusOne provided to Todd to run GeoCommons.
Within a few hours of the first day, we had our imagery tiles being consumed by many web-based Geospatial applications that Todd would also be able to take to the field with him.
Most of the applications were already configured to work with Google Maps on the Internet, so they knew how to utilize our tiles. In many cases, only a few lines of code and new URLs had to be added to their software packages to work with our Enterprise version of Maps, which can be used online or offline.
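The reason the swap is so easy is that these applications all share the same Google Maps-style tile addressing scheme, so pointing them at a new server is mostly a matter of changing a URL template. A minimal sketch of the standard z/x/y tile math (my own illustration, not code from the event; the server hostname is a placeholder):

```python
import math

def latlon_to_tile(lat, lon, zoom):
    """Convert a WGS84 coordinate to Google Maps-style z/x/y tile indices
    (spherical Mercator projection, 256px tiles)."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return zoom, x, y

# Point a slippy map at a different tile server by swapping the URL template;
# "earth-enterprise.local" is a made-up host, not the actual Camp Roberts box.
z, x, y = latlon_to_tile(34.43, 70.45, 12)  # roughly Jalalabad
url = "http://earth-enterprise.local/tiles/%d/%d/%d.png" % (z, x, y)
```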
First - Mikel Maron of the OpenStreetMap Foundation, Josh Livni of Umbrella Consulting, and Michal Migurski of Stamen Design (creator of Walking-Papers.org) got our imagery tiles feeding in as the basemap for the Walking Papers slippy map.
Michal built Walking Papers for users in mixed tech environments to print out hard-copy maps from OpenStreetMap and bring them into the field to take notes and collect data. Users can then bring their paper maps back into Walking Papers by submitting a scan, and their notes and annotations are automatically re-georeferenced to the map thanks to some QR code magic.
This alone is a substantial advancement - it keeps the loop of geospatial production working without ever losing georeferencing or forcing users to do extra work.
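Under the hood, the re-georeferencing step boils down to a coordinate transform: once the QR code identifies which print the scan came from (and therefore its geographic bounding box), annotations can be mapped from pixel space back to geographic space. A simplified sketch of that idea, assuming an axis-aligned, unrotated scan (Walking Papers itself also handles rotation and skew):

```python
def pixel_to_geo(px, py, img_w, img_h, bbox):
    """Map a pixel on a scanned map back to lon/lat, assuming the scan is
    axis-aligned with the printed map's bounding box (west, south, east,
    north). This ignores rotation/skew, which the real system corrects for."""
    west, south, east, north = bbox
    lon = west + (px / img_w) * (east - west)
    lat = north - (py / img_h) * (north - south)
    return lon, lat

# A note drawn at the center of a 1000x800 scan of a map around Jalalabad:
lon, lat = pixel_to_geo(500, 400, 1000, 800, (70.40, 34.38, 70.50, 34.48))
```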
Todd has already mapped out a lot of Jalalabad, but is going to help the locals do much of the mapping of their homeland themselves to build out the data on OpenStreetMap.org. Walking Papers will allow him to orchestrate this without a lot of computers or other tech gear to worry about - just print a map and give it to a local expert to annotate, or send them out in the field to mark points of interest and street names. Since many hardcopy maps are going to be printed, the team realized there was an opportunity to use them for multiple purposes.
Josh injected some Python scripting into the Walking Papers workflow that creates an arbitrary grid on the image and can overlay transparent tiles - from the Google Earth Server or, thanks to Mikel Maron's work, from OpenStreetMap - over the base imagery. The grid is intended to allow someone in the field, with no GPS, to use a simple, cheap cell phone and report incidents via SMS by utilizing another of the technologies integrated with Google Maps: InSTEDD's GeoChat.
The grid Josh designed is optimized to make it easy for the end user to report a position back over SMS with minimal character usage - and it is scale-agnostic. The grid is paired to the map ID on the Walking Papers map, and GeoChat then translates the coordinates on the fly, displaying them as an overlay on a map in whichever format users prefer: Lat/Lon, MGRS, USNG, etc.
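I don't have Josh's actual code, but the core idea - turn a short, SMS-friendly grid reference plus the printed map's bounding box into a real coordinate - can be sketched like this (the letter-column/number-row labeling scheme here is my own assumption for illustration):

```python
import string

def decode_grid_cell(cell, bbox, cols=10, rows=10):
    """Decode a short grid reference (e.g. 'C4': column letter, row number)
    into the lon/lat of that cell's center. Because the grid is relative to
    the map's bounding box, the scheme is scale-agnostic. The exact labeling
    is hypothetical, not necessarily Josh's actual design."""
    west, south, east, north = bbox
    col = string.ascii_uppercase.index(cell[0].upper())
    row = int(cell[1:]) - 1
    lon = west + (col + 0.5) * (east - west) / cols
    lat = north - (row + 0.5) * (north - south) / rows
    return lon, lat

# An SMS report of "C4" against a printed map covering a one-degree square:
lon, lat = decode_grid_cell("C4", (70.0, 34.0, 71.0, 35.0))
```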
Walking Papers was ported to run offline by Josh and Michal and now resides in a local capacity on the same virtual machine and hard drive that is running the Google Earth Enterprise Portable server.
The guys from Sahana and Development Seed were able to bring our tiles into their content and disaster management systems, and Andrew Turner was able to log in to his donated server remotely and reconfigure GeoCommons to use the tiles as well. Now all of these platforms have the code in place to pull tiles from Google Earth Enterprise servers in the future when the next disaster strikes.
All paired together, a mobile deployer or crisis responder can take a very lightweight but powerful suite of geospatial utilities into the field for stand-alone use, or connect to a network and share their data with other local and remote users - with or without Internet connectivity.
I was truly impressed with this group - we knew what our objective was and knew where our different technologies could come together and provide the needed solution. For me, it was wonderful to see a compelling use of our technology and watch it work pretty much flawlessly with open geospatial technologies as we've been promoting. Not bad for a few hours of hard work and a group vision.
*UPDATE:
Mikel Maron's Summary of the Work @ Brain Off
http://brainoff.com/weblog/2009/08/10/1410
http://brainoff.com/weblog/2009/08/10/1435
Development Seed Blog:
http://developmentseed.org/blog/2009/aug/07/integrating-50-centimeter-data-national-geospatial-intelligence-agency
http://developmentseed.org/blog/2009/aug/05/data-collection-simulations-field-camp-roberts
Eric Gunderson's Photos:
http://www.flickr.com/photos/developmentseed/sets/72157621841118753/
Tuesday, March 3, 2009
Open Geoprocessing: Let's Share Some Code
Since that time, I've been giving the demo at the Google Earth Enterprise Users Conference, the ESRI Federal Users Conference, and the NOAA GeoTools conference, and a lot of folks have asked for the source code.
I always wanted to make this code open and available for all to implement and improve on, so I've released it to a new Google Code Site: Open Geoprocessing.
I'm looking for anyone interested in fleshing out some of the functions to make them more robust, but I'm also looking to move on from utilizing the ESRI JS API for the geoprocessing to doing the same type of geoprocessing using some completely free Open Source Geo tools.
Here's how I see this working - let me know if I'm headed in the wrong direction:
Tuesday, February 17, 2009
ESRI Geoprocessing in Google Earth
OK - I guess what you've been saying is that while hundreds of millions of people have downloaded Google Earth all around the world, and have used it to prepare for and respond to natural disasters, find drug farms, protect the rainforest, bring attention to and spatially explain the Crisis in Darfur, even do Imagery Intelligence Analysis - that all of this is "just qualitative analysis."
Tough crowd.....
So despite all of those things, I still hear that Google Earth isn't analysis, and this almost always comes from staunch GIS shops.
Hey, I understand - I'm a GIS guy too. I guess what you're getting at is that unlike the GIS systems you've always used, Google Earth is more of a Geospatial Exploration System, and you want to be able to do some quantitative analysis - some geospatial analytics.
Ah, maybe some Geoprocessing?
Geoprocessing is a GIS operation used to manipulate GIS data. A typical geoprocessing operation takes an input dataset, performs an operation on that dataset, and returns the result of the operation as an output dataset. Common geoprocessing operations include geographic feature overlay, feature selection and analysis, topology processing, raster processing, and data conversion. Geoprocessing allows for definition, management, and analysis of information used to form decisions.[1]
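To make that definition concrete, here's a toy "feature selection and analysis" operation - an input dataset goes in, an operation runs, and a derived output dataset comes out. This is my own illustration in plain Python, not tied to any particular GIS package:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_within(features, center, radius_km):
    """A geoprocessing primitive: input dataset -> operation -> output dataset."""
    clat, clon = center
    return [f for f in features
            if haversine_km(f["lat"], f["lon"], clat, clon) <= radius_km]

wells = [{"name": "A", "lat": 34.43, "lon": 70.45},
         {"name": "B", "lat": 35.50, "lon": 69.20}]
nearby = select_within(wells, (34.42, 70.46), 25.0)  # only well "A" is in range
```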
OK, so what if we could extend that Google Earth user experience to be able to leverage your current or future geoprocessing capabilities?
I first saw the potential for this, before coming to Google, when Jack Dangermond and John Hanke co-presented at Where 2.0 last May.
They showed a great little demo of the Google Earth client communicating with the ESRI ArcGIS Server, but you could tell the communication was a little bit of a kludge.
They were using the embedded browser and the center point from the Google Earth ?BBOX= NetworkLink parameter to pull the demo off.
That was before the Google Earth API (3D Google Earth browser plugin) was released.
Now however, we can actually build this application utilizing simple JavaScript and capturing key user events, then passing these events as real geometries over to the ESRI ArcGIS Server JavaScript API and do it all right in the web browser!
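The glue is mostly data marshaling: capture coordinates from plugin events, then reshape them into the JSON geometry format ArcGIS Server's REST endpoints expect. Here's a sketch in Python for readability (the actual app does this in JavaScript), following ESRI's published JSON geometry layout:

```python
import json

def to_esri_polygon(coords):
    """Convert a list of (lon, lat) pairs - e.g. captured from Google Earth
    plugin click events - into an ESRI JSON polygon. The ring must be closed
    (first point repeated last) per the ArcGIS Server REST geometry format."""
    ring = [[lon, lat] for lon, lat in coords]
    if ring[0] != ring[-1]:
        ring.append(list(ring[0]))  # close the ring
    return {"rings": [ring], "spatialReference": {"wkid": 4326}}

# Three clicks in Google Earth become a geometry parameter for a GP task:
clicks = [(-77.05, 38.89), (-77.03, 38.89), (-77.03, 38.91)]
geometry = json.dumps(to_esri_polygon(clicks))
```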
During that demo, however, I was still unable to conduct the second required geoprocessing task on any drive-time rings that were complex or that covered a large area - which was most of them.
It turns out that there is a pretty significant limitation on the geometries you can send in geoprocessing task queries to ArcGIS Server if your application isn't running on the same server as the ESRI software.
It bombs out if the geometry in the query exceeds 2,000 characters (a browser URL-length limitation), and the only way around this currently is to complicate the ESRI JSAPI deployment with a proxy in ASP.Net or Java/JSP...
This is a shame - it makes interacting with ArcGIS Server services more difficult than I'd like. I don't think I should have to mess with Tomcat configurations to get things working...but alas, for now we do.
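The decision the proxy pattern forces on you is easy to see in code. A sketch of the length check (my own illustration; the URLs and the serialized-geometry stand-in are made up):

```python
from urllib.parse import urlencode

def needs_proxy(service_url, params, limit=2000):
    """Decide whether a geoprocessing request fits in a plain GET URL or must
    be POSTed through a same-origin proxy page. The 2,000-character threshold
    matches the browser URL-length limit described above."""
    query = service_url + "?" + urlencode(params)
    return len(query) > limit

# A drive-time ring with many vertices blows past the limit very quickly:
big_geometry = '{"rings": [[...]]}' * 200   # stand-in for a long serialized polygon
route = "proxy" if needs_proxy("http://example.com/gp/execute",
                               {"geometry": big_geometry, "f": "json"}) else "direct"
```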
Ok, now off to the ESRI Federal User's Conference - I'll be demoing this application at the Google booth, so stop by and say hello and check it out and tell me how much you hate it in person :)
I promise to have some well commented source code and a link to try out the application up on Google Code ASAP!
Next up - Geoprocessing with some Open Geospatial tools on the backend.....stay tuned.
Friday, February 6, 2009
Is it Just Me, or does ESRI's ArcGIS Server Mashup Challenge Feel Like a Rigged Scam?
For some reason or another, I was toying with the idea of going to the ESRI Developer Summit this year.
Unfortunately, after looking at the agenda it looked less like what I would consider a developer summit and more like an indoctrination into more ESRI tools.
Wish they had a track for those of us who want to keep ESRI at the edge of our geospatial world: leveraging the ArcGIS Server APIs where they are useful, and writing the majority of our code against more open platforms.
Speaking of that, I've been having a lot of fun working with some of ESRI's demo ArcGIS Server APIs, and have a pretty sweet mashup I hope to have launched by the time of the ESRI Federal User Conference in mid-February.
As I was stumbling around the Developer Conference today, I came across an advertisement for the Mashup Challenge (linked in the picture above).
1st Place: $7,000
2nd Place: $3,000
Nice, I could set up a sweet OSGeo Rig with $7k.....
But, looks like I'm not eligible:
3. Eligibility: This Contest is open to all developers who are the legal age of majority in
their country of residency, including Sponsor’s business partners, so long as
applicant/applicant's organization is a licensed user of ArcGIS Server 9.2, ArcGIS
Server 9.3 or a current ESRI Developer Network (EDN) subscriber prior to the
Contest Period, except for those developers who are residents of Burma, Cuba,
Iran, Libya, Malaysia, North Korea, Sudan, Syria, Province of Quebec, and where
prohibited by national, state, provincial, or any other governmental laws or
regulations.
My org isn't a licensed user of ArcGIS server, and we're not in the EDN...bummer.
What would I need to spend to be in the running to win that $7k???
Also, it looks like ESRI will own the code that I or anyone else would submit, and can use it for whatever purposes they see fit....like a demo to sell more ESRI licenses....
BY SUBMITTING THE CODE SAMPLE, THE APPLICANT REPRESENTS AND
WARRANTS THAT HE/SHE HAS ALL RIGHT, TITLE AND INTEREST NECESSARY TO
GRANT THE SPONSOR THE WORLDWIDE, IRREVOCABLE AND UNRESTRICTED
RIGHT AND LICENSE TO ADAPT, PUBLISH, USE, EDIT, AND/OR MODIFY SUCH
CODE SAMPLE IN ANY WAY AND POST THE ORIGINAL CODE SAMPLE ON THE
INTERNET OR USE THE ORIGINAL CODE SAMPLE IN ANY OTHER WAY AND
AGREES TO INDEMNIFY AND HOLD SPONSOR HARMLESS FROM ANY CLAIMS TO
THE CONTRARY.
No thanks - that's a lot to ask for the CHANCE to win $7k.
Thursday, February 5, 2009
Will Latitude Succeed Where Spatial Social Networking Trailblazers Failed?
When I first saw what is now released as Google Latitude a few months after joining Google, I'll admit I was a little surprised.
I guess my first exposure to this concept was the awesome Loopt presentation at Where 2.0 in 2007.
I was a Verizon subscriber at the time, with a clunky Windows Mobile 5 phone and a pain-in-the-butt Bluetooth GPS and there was no Loopt application or service on my platform.
It wasn't until BrightKite came along that I was able to painfully start sending my position out to the world and make stalking me a little easier for everyone.
I liked BrightKite, and liked how I was able to integrate the position with Facebook and even Yahoo's FireEagle.
I loved the idea of sharing where I was with friends but there was a problem; my friends looked at this technology and said "Dude, that's creepy, I'm not telling you or anyone else where I am all the time."
This is a problem - the geo-nerd in me loved the concept, and my normal friends hated it.
Then, in the spring of 2008, I JailBroke my friend's original iPhone, and checked out a rogue app called Twinkle, a Twitter application with built in GPS / Location support.
Instantly, I saw there was a "Nearby" tab, and I saw dozens of folks around me in San Francisco posting pictures, and Tweeting away.
This JailBroken iPhone App drove me into an Apple-Fanboy frenzy and I was hanging on every single rumor about a then-speculated iPhone 3G with even better GPS support.
Twinkle worked so well, that I dropped Verizon and got an iPhone 3G the first day it was available and installed 2 applications immediately: Loopt and Twinkle.
Since August, I've gotten exactly 1 friend to join me on Loopt; in that time, I've checked the "Nearby" tab in Twinkle several times a day.
I attribute Twinkle's spatial social networking success to the simple fact that there was more than one reason to launch the Twinkle app, and really only one reason to launch Loopt.
I was mostly logging into Twinkle to send Twitter updates to my group of followers, but while I was there I always checked out what was being Tweeted around me.
It has been really interesting to watch how the Nearby features of Twinkle are being used by iPhone users (a gigantic user base at this point). One nearby Twinkler in D.C. explained it to a new user as "a chatroom for people you find all around you."
This was a fundamentally different concept from what I had been hearing from Loopt and BrightKite, and it is probably why Twinkle has in many cases turned into a creepy hook-up tool as its popularity surged.
So creepy that I was tricked into clicking on my first NUDE Twinkle user's uploaded picture last week - it was a dude - not cool.
Twinkle is almost too big now, but I think people have a taste for why sharing your location can be cool and have started to accept that there are benefits to sharing your location.
I think many middle-of-the-road technology users will likely still want to be a little less liberal in their location broadcasting than the users on Twinkle are, so features like limiting your location to trusted friends, and multiple tiers of accuracy settings - so people don't know EXACTLY where you are but can at least know what city you're in - will be well received.
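That kind of tiered accuracy is simple to implement - just coarsen the coordinate before sharing it. A hypothetical sketch (the tier names and precision mapping are mine, not how any of these services actually do it):

```python
def fuzz_location(lat, lon, tier):
    """Coarsen a location before sharing it. Decimal places map roughly to
    precision: 0 -> ~100 km ("what city"), 1 -> ~10 km, 3 -> ~100 m.
    Tier names and the mapping are illustrative assumptions."""
    places = {"city": 0, "neighborhood": 1, "exact": 3}[tier]
    return round(lat, places), round(lon, places)

# A friend in the "city" tier sees only a very rough position:
shared = fuzz_location(38.8977, -77.0365, "city")
```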
Paul Ramsey was rather miffed by Google's Latitude launch, as he viewed the beta as an innovation killer.
It's been a while since Google brought out anything truly innovative, but they sure have shown themselves willing to copy the services of upstart companies and try to snatch their markets away
However, I think the move is decidedly less evil than Paul perceives it.
Google has shown a clear interest in a broad spectrum of geospatial technologies, and has provided data to hundreds of millions of users that just 4 years ago never would have had access to it.
It only makes sense that many geospatial technologies will be explored by Google and, in almost every case, improved on and enhanced by the company and, even more importantly, the users.
You can't say Google came to the party late and is trying to snuff out start-ups; they obviously saw some potential for the basic idea when they bought Dodgeball in 2005.....
I can also tell you that 5 of the 7 friends I invited agreed to share their location with me on the first day of the service's launch.
Why?
Probably because it works with many cell phone models and carriers, and works well with my friends' GMail accounts and their iGoogle, which they are using all day long.
I think Twinkle showed that it requires more than a single-purpose app to get people to really utilize spatial social networking, and I think Google has realized this.
Tying the feature into a user's regular use of things like Search, GMail, and other Google applications looks like a winning formula to spatially enable a gigantic user base.
I can only hope that Google looks at the FireEagle and BrightKite models, which did an excellent job of making location something abstracted from a single application or platform - something that can tie into many other social network platforms: Flickr, Facebook, Picasa, Blogger, WordPress, even Loopt and BrightKite themselves.
I think they will.