20 October 2014

Salesforce Application Monitoring with Event Log Files

Have you ever:
  • wondered how to make your sales and support reps more successful?
  • wanted to track the adoption of projects you roll out on the Salesforce platform, like S1, Chatter, or the Clone This User app from Arkus?
  • wanted to find out which Apex classes are succeeding and how long your Visualforce pages take to render in production?
  • wondered why some reports run slower than others?
  • needed to audit whether ex-employees left the company with your customer list?
Application Monitoring using Event Log Files, new in the Winter '15 release, enables all of these use cases and many, many more using an easy-to-download, file-based API to extract Salesforce app log data.

When we started building this feature, which has been in pilot for over a year, we talked with a lot of customers who wanted access to our server logs for a variety of use cases.

What we heard from many of those customers was that they wanted to easily integrate the log data from all of their organizations with their back-end reporting and audit systems so they could drill down into the day-to-day detail. As a result, you won't find a user interface within setup to access these files; everything is done through the API in order to make integration easy.

The idea behind this feature is simple. Every day, Salesforce generates a massive amount of app log data on our servers.

Our app logs do not contain customer data; instead, they contain metadata about the events happening in an organization. For example, we store a report id rather than a report name, and an account id instead of an account name. Because of this obfuscation, customers resolve those ids back into record names on their own side.

Every night, we ship these logs to a Hadoop server where we map reduce over the log data to create Event Log Files for organizations enabled with the feature.

As a result, every morning, a customer can download the previous day's log events in the form of CSV (comma separated values) files using the API. We chose CSV as a file format because it's easy to use when migrating data between systems.
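Concretely, pulling a day's files is two REST calls: a SOQL query to list the EventLogFile records, then one request per record to fetch its CSV body. Below is a minimal curl sketch; the instance URL and token are placeholders, not real values:

```shell
# Placeholders -- substitute your own instance and OAuth access token.
INSTANCE="https://na1.salesforce.com"
TOKEN="00Dxx_example_session_token"

# Build the SOQL query that lists one day's EventLogFile records.
build_query() {
  echo "SELECT Id,EventType,LogDate FROM EventLogFile WHERE LogDate = $1"
}

# URL-encode the spaces for use as a query parameter.
QUERY="$(build_query Yesterday | sed 's/ /+/g')"

# 1) List the records (returns JSON with an Id and EventType per file):
#    curl -H "Authorization: Bearer $TOKEN" \
#         "$INSTANCE/services/data/v32.0/query?q=$QUERY"
# 2) Download one file's CSV body by its record Id:
#    curl -H "Authorization: Bearer $TOKEN" -o Login.csv \
#         "$INSTANCE/services/data/v32.0/sobjects/EventLogFile/<Id from step 1>/LogFile"
echo "$QUERY"
```

From there, each downloaded CSV can be loaded straight into whatever reporting or audit system you already run.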

Once you have this file, you can easily integrate it with a data warehouse analytics tool, build an app on top of a platform like force.com, or buy an ISV (Independent Software Vendor) built app to analyze and work with the data.

To make it easy to try this feature out and build new kinds of apps, we are including one day of data retention for all Developer Edition organizations. That means if you have a Developer Edition organization already, just log into it using the API and you'll have access to the previous day's worth of log data. If you don't already have one, just go to http://developerforce.com/signup to get your free Developer Edition org.

Application monitoring at Salesforce with Event Log Files has just made auditing and tracking user adoption easier than ever before.

Icons by DryIcons

06 October 2014

Where is the Salesforce Hacker at Dreamforce 2014?

This will be my tenth year presenting at the conference. And every year, I look forward to this event!

When I joined in the Summer of 2005, the big news of the conference was:
  • Customizable Forecasting
  • AppExchange
  • Field History Tracking on Cases, Solutions, Contracts, and Assets
Customizable Forecasting is now in its third iteration and looks better than ever.

AppExchange has over twenty-five hundred apps that have been installed over two and a half million times.

And Field History has grown to almost all objects and over one hundred billion rows of audit data.

But what makes Dreamforce truly remarkable is definitely not the features that we highlight, the band that headlines the conference, or the orders of magnitude growth - it's the people who come to the conference. Every year, I talk with as many customers, partners, and vendors as I can. I love Dreamforce for their stories, their use cases, and the challenges they bring to the conference in hopes of replacing those challenges with solutions.

In the words of a colleague of mine, this conference is magical.

If you get a chance, stop by some of my sessions listed below and feel free to introduce yourself. I would love to meet you!

New Event Monitoring: Understand Your Salesforce Org Activity Like Never Before
InterContinental San Francisco, Grand Ballroom AB
Monday, October 13th, 12:00 PM - 12:40 PM

Learn the Four A's to Admin Success
San Francisco Marriott Marquis, Yerba Buena - Salon 7
Monday, October 13th, 3:30 PM - 4:10 PM

Event Monitoring for Admins
Moscone Center West, Admin Theater Zone
Tuesday, October 14th, 11:00 AM - 11:20 AM

Project Wave Use Case: Audit Analytics
Moscone Center West, Room 2007
Tuesday, October 14th, 4:00 PM - 4:40 PM

Do-It-Yourself Access Checks with Custom Permissions
Moscone Center West, Room 2009
Tuesday, October 14th, 5:00 PM - 5:40 PM

Creating Dynamic Visualizations with Event Log Files
Moscone Center West, Room 2006
Wednesday, October 15th, 3:15 PM - 3:55 PM

Parker Harris's True to the Core: What's Next for Our Core Products
YBCA - Lam Research Theater
Wednesday, October 15th, 5:00 PM - 5:40 PM

22 July 2014

DIY salesforce.com monitoring with real-time SMS notifications

Recently, while on a customer on-site, I was asked a simple question - how can we do real-time monitoring of salesforce.com?

These were system administrators and operations people used to monitoring the uptime of their data center. Of course they expected real-time monitoring and automated alerts.

There are many ways to monitor Salesforce. And when there isn't standard functionality for what you need to monitor, there is always a custom solution.

About a week ago, I started running into some issues with a new service that I was building. I was inspired by a sparkfun blog article I read about an open API based on Phant that allows you to post arbitrary custom values for real-time monitoring. I decided to build my own real-time monitoring system based on a simple heartbeat design that would notify me when my heartbeat skipped a beat. And when it didn't skip a beat, I just wanted to log the success and chart the trend over time for discussion with our engineers. This was similar to the requirements I heard while at the on-site with my customer.

I had some basic requirements for the first iteration of my monitoring service:

  1. it had to be automated to provide real-time data
  2. it had to perform the simplest query to determine availability 
  3. the query mechanism needed to be secure and hosted outside of salesforce 
  4. the charting and notification systems had to be as simple as possible, preferably with no passwords or fees in the first iteration; as long as I could obfuscate sensitive data, the data could even be publicly exposed

My first prototype was done in about half an hour.

  • I created a bash shell script that I hosted on a Linux box under my desk. This was the secure part hosted outside of salesforce.
  • I created a CRON job on my Linux box to run the shell script every minute. This would consume 1,440 API calls a day, but I figured I could fine-tune the frequency later to suit my needs: a tighter interval makes the monitoring more real-time at the cost of more API calls, and loosening the interval reduces that cost. This was the automated part of the solution.
  • The shell script data flow was simple: log in using OAuth and curl, query to get a count of an sObject, and parse the result. If the result has a number, consider it a success; otherwise, consider it a failure and log the error.
  • I used a free data publishing service from data.sparkfun.com. Originally created for publicly accessible IoT (Internet of Things) apps like weather device data, it made it trivial to expose the data I needed through a simple REST API. In the next iteration, I would use keen.io, which has more functionality and freemium options but involved more design than necessary for wiring up my first iteration. You can check out my live heartbeat monitor that I'm still using to monitor my service.
  • I created a Google charting API report to visualize the data. This was the visualization part of the solution and was entirely based on a phant.io blog posting.
  • I used a freemium SMS service called SendHub to handle the notifications. I originally used Twilio but needed a simpler, freemium option for the first iteration.
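Stitched together, the heartbeat skeleton looks roughly like this. The crontab line, paths, and sample JSON are illustrative; the one real detail the script leans on is that a successful REST count query comes back as JSON containing a totalSize field:

```shell
# Illustrative crontab entry (paths are hypothetical; install with `crontab -e`):
# * * * * * /home/me/heartbeat/check_salesforce.sh >> /home/me/heartbeat/run.log 2>&1

# Decide success/failure by extracting totalSize from the query response body.
parse_count() {
  echo "$1" | sed -n 's/.*"totalSize":\([0-9][0-9]*\).*/\1/p'
}

# Stand-in for the real curl output from the REST query.
RESPONSE='{"totalSize":42,"done":true,"records":[]}'
COUNT="$(parse_count "$RESPONSE")"

if [ -n "$COUNT" ]; then
  echo "heartbeat ok: $COUNT rows"   # would log a success row to data.sparkfun.com
else
  echo "heartbeat FAILED"            # would log the failure and trigger the SMS alert
fi
```

If the response is an error payload instead of a query result, parse_count emits nothing and the script falls through to the failure branch.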

Every minute, the CRON job would wake the bash shell script. The script would log into salesforce using the REST API and query a count of my new sObject. If the query succeeded, it would log a row to sparkfun, which I viewed on their public page. If it failed, it would log another row to sparkfun with the error message and send an SMS notification of the failure to my cell phone. To view a trend of successes and failures over time (which was useful to see what happened while I was away from my phone or asleep), I used my Google charting report.
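Logging a row to sparkfun is a single request against the Phant input endpoint. The stream keys and field names below are made-up placeholders for illustration:

```shell
# Hypothetical Phant stream keys -- use your own from data.sparkfun.com.
PUBLIC_KEY="abc123public"
PRIVATE_KEY="def456private"

# Build the input URL for one row; Phant takes the private key plus one
# query parameter per field defined on the stream.
log_row() {
  echo "https://data.sparkfun.com/input/${PUBLIC_KEY}?private_key=${PRIVATE_KEY}&status=$1&detail=$2"
}

# In the real script these URLs would be passed to `curl -s`:
echo "$(log_row ok 42)"           # success row with the query's row count
echo "$(log_row fail LOGIN_ERROR)" # failure row carrying the error message
```

Because the stream is public, only obfuscated values (a count, an error code) go into the fields, per requirement 4 above.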

This DIY project highlights a simple case of real-time monitoring. If you want to try it out, you can find the code for this project in my GitHub repository - heartbeatMonitor.