23 March 2016

Clean up your users' reports in their private folders


If I had a nickel for every time an administrator asked for access to their users' private report folders, I'd have at least $20.45 in my pocket. Just enough to see a movie and buy a small popcorn as I celebrate the awesomeness that is our operational reports team at Salesforce.

A little-known but very cool feature in the Spring '16 release is the allPrivate scope within SOQL and the API. If you're an administrator and not sure how to use the API, check out my API First blog post or just try the publicly available workbench tool, which makes it easy for anyone to use the API.

The allPrivate query scope enables an administrator to query all reports and dashboards that reside in a user's private folder. As a result, when a user becomes inactive or private reports aren't run for long periods of time, it's possible for an administrator to remove those reports.

There are two simple use cases that illustrate this feature:
1. reports in private folders that haven't been run in the past year
SELECT Name,FolderName,Owner.Name,Owner.isActive,Id,LastModifiedDate,LastRunDate 
FROM Report 
USING SCOPE allPrivate 
WHERE LastRunDate < LAST_N_DAYS:365

2. reports in an inactive user's private folder
SELECT Name,FolderName,Owner.Name,Owner.isActive,Id,LastModifiedDate,LastRunDate
FROM Report 
USING SCOPE allPrivate 
WHERE Owner.isActive = false
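
If you'd rather script the cleanup than run these queries one at a time in workbench, here's a minimal sketch using the simple_salesforce Python library. The credentials are placeholders and the delete-on-sight loop is my own assumption, so dry-run it (print only) before letting it loose.

from simple_salesforce import Salesforce

# Placeholder credentials - use your own admin login.
sf = Salesforce(username='admin@example.com', password='...',
                security_token='...')

# Query #1 from above: private reports that haven't run in the past year.
soql = ("SELECT Id, Name, FolderName, Owner.Name "
        "FROM Report USING SCOPE allPrivate "
        "WHERE LastRunDate < LAST_N_DAYS:365")

for report in sf.query_all(soql)['records']:
    print('Deleting {Id}: {Name}'.format(**report))
    sf.Report.delete(report['Id'])  # deletes via the sObject REST endpoint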

Another reason why this feature is at the top of my list for Spring '16 is that it enables Event Monitoring customers to denormalize the names of reports in private report folders. Now, when auditing whether an inactive user exported a report from their private report folder, an administrator can pull out both the report id and the report name using these SOQL queries, whereas before all they got was a report id.
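
As a sketch of that audit join, you can build an id-to-name map with one allPrivate query and use it to resolve the report ids that show up in your ReportExport event log rows. simple_salesforce again; watch out for 15- versus 18-character ids when you match.

from simple_salesforce import Salesforce

sf = Salesforce(username='admin@example.com', password='...',
                security_token='...')

# Id -> name map that now includes reports in private folders.
names = {r['Id']: r['Name'] for r in sf.query_all(
    "SELECT Id, Name FROM Report USING SCOPE allPrivate")['records']}

def report_name(report_id):
    # report_id comes from a ReportExport event log row; before Spring '16,
    # a private report's id resolved to no name at all.
    return names.get(report_id, '(unknown report)')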

The Spring '16 release is a great time for spring cleaning your users' private reports and for auditing which private reports were exported.

03 February 2016

Quiddity - the 'whatness' of concurrent Apex limits in Event Monitoring

Quiddity.

I had to look that one up. One of our awesome architects here at Salesforce, Peter Wisnovsky, came up with it.

Quiddity is defined by Wikipedia as the essence of an object. Literally its "whatness" or "what it is".

What the?! Why not just call it 'type', 'category', or 'dimension'? I asked Peter and he told me that 'type' was too overloaded. I personally just think it's because 'type' was too boring whereas 'quiddity' is supercalifragilisticexpialidocious!

Regardless, it turns out that understanding an Apex class's quiddity is pretty important when determining whether you've hit the synchronous concurrent Apex limit.

From the Apex Developer Guide:
"The concurrent Apex limit is the number of synchronous concurrent requests for long-running requests that last longer than 5 seconds for each organization; each org has a limit of 10. If more requests are made while the 10 long-running requests are still running, they're denied."
What this really means is that when the limit is exceeded, users may receive the following error, which can seem confusing to an end user or even an admin:
"Unable to Process Request.  Concurrent requests limit exceeded.
To protect all customers from excessive usage and Denial of Service attacks, we limit the number of long-running requests that are processed at the same time by an organization. Your request has been denied because this limit has been exceeded by your organization. Please try your request again later."
There's a great blog post on the engineering developer blog by the former blimp pilot, John Tan, that goes into more detail about what this limit is and how to work with it.

The quiddity log attribute is key to unlocking an Event Monitoring query to find out which Apex classes have the potential to hit the concurrent Apex limit in your org. Well, that's pretty darn useful, so we added the quiddity attribute to the ApexExecution file type in the Spring '16 release.

There are two reasons why you might run into the concurrent Apex limit:
1. High callout time <--- Probably the most common cause!
2. High database time

If the cause was a high callout time, you can filter your ApexExecution file based on QUIDDITY (E, H, M, R, V, W, X, L, K, or I) and CALLOUT_TIME (>5000), both of which were added in the Spring '16 release.

[Image: example of how you can use quiddity and callout time to identify Apex classes]
If the cause was a high database time, you can filter your ApexExecution file based on the same QUIDDITY values (E, H, M, R, V, W, X, L, K, or I) plus RUN_TIME > 5000.
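
Here's a rough sketch of both filters in Python, pulling the most recent ApexExecution event log file and flagging anything over the five-second mark. The column names follow the Spring '16 field list, but verify them against your own files before relying on this.

import csv
import io
import requests
from simple_salesforce import Salesforce

sf = Salesforce(username='admin@example.com', password='...',
                security_token='...')

SYNC = set('EHMRVWXLKI')  # the synchronous quiddity values (full list below)

log = sf.query("SELECT Id, LogFile FROM EventLogFile "
               "WHERE EventType = 'ApexExecution' "
               "ORDER BY LogDate DESC LIMIT 1")['records'][0]

# LogFile is a path to the CSV blob; fetch it with the session id.
resp = requests.get('https://{}{}'.format(sf.sf_instance, log['LogFile']),
                    headers={'Authorization': 'Bearer ' + sf.session_id})

for row in csv.DictReader(io.StringIO(resp.text)):
    callout = float(row.get('CALLOUT_TIME') or 0)
    run = float(row.get('RUN_TIME') or 0)
    if row['QUIDDITY'] in SYNC and (callout > 5000 or run > 5000):
        print(row['QUIDDITY'], callout, run, row.get('ENTRY_POINT'))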

The values for quiddity in the logs include:

A - Legacy Batch Apex
C - Scheduled Apex
E - Inbound Email Service
F - Future
H - Apex REST
I - Invocable Action
K - Quick Action
L - Aura
M - Remote Action
Q - Queueable
R - Synchronous
S - Serial Batch Apex
T - Apex Tests
TS - Synchronous Apex Tests
TA - Asynchronous Apex Tests
V - Visualforce
W - SOAP Webservices
X - Execute Anonymous

This won't tell you exactly where you hit the limit, but it will at least identify which synchronous Apex classes run longer than five seconds and probably should be optimized using the tips provided in John's blog post.

In addition to quiddity, we also added EXEC_TIME to ApexTrigger files in order to get the run time of each trigger execution, since RUN_TIME turns out not to store any values in production. Who knew?!

12 January 2016

Logging Salesforce User Activity with Heroku's Logplex

If you can't already tell, I'm a huge advocate of logging user activity. There's something incredibly powerful about understanding what people did in the past. I guess it harkens back to my days as a high school history teacher, when I would tell everyone on the first day of class that students who fail my class are doomed to repeat it.

In my quest to understand how Salesforce customers want to log and measure activity, I've heard some consistent themes:
1. the use case always drives the granularity of the data we need to capture
2. while real-time isn't always necessary, it's almost always desired
3. I really want one place to go to for log data
4. if I have access to the raw log data, I can always slice-and-dice it the way I want in my reporting app of choice

One interesting solution that I've been playing with is capturing events in Salesforce orgs and sending them over to Heroku. If you haven't heard of Heroku before, you should check it out. It's a platform for developers to effectively deploy and manage their applications. One of its great advantages is the add-on marketplace called Elements. Whether it's cache, video processing, data storage, or monitoring, it's easy to plug Heroku apps into a great ecosystem of app providers.

Logging Salesforce user activity is pretty simple:
1. I created a polling app that runs on Heroku. In my case, I created a Python script that polls Salesforce every minute to retrieve Setup Audit Trail events, but it could just as easily have captured Login History, Data Leakage, Apex Limit Events, or really any object accessible via the Salesforce API. A sketch of the polling loop follows this list.
2. The Python script writes Salesforce user events to the Heroku logging system called Logplex.
3. Logplex is integrated with a series of add-ons including Logentries, Sumologic, and PaperTrail. It can also feed other back-end systems like a SIEM tool or notification apps like PagerDuty.
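
A stripped-down version of that polling loop looks something like this. The environment variable names and the log line format are my own choices; the one real trick is that on Heroku, anything printed to stdout lands in Logplex.

import os
import time
from simple_salesforce import Salesforce

sf = Salesforce(username=os.environ['SF_USERNAME'],
                password=os.environ['SF_PASSWORD'],
                security_token=os.environ['SF_TOKEN'])

last_seen = '2016-01-01T00:00:00Z'  # high-water mark for incremental polls

while True:
    rows = sf.query_all(
        "SELECT Action, Section, CreatedDate, CreatedBy.Name "
        "FROM SetupAuditTrail WHERE CreatedDate > {} "
        "ORDER BY CreatedDate".format(last_seen))['records']
    for row in rows:
        # One line per event; Logplex forwards these to any attached drain.
        print('audit_trail user="{}" action={} at={}'.format(
            (row['CreatedBy'] or {}).get('Name'), row['Action'],
            row['CreatedDate']), flush=True)
        # REST returns e.g. 2016-01-12T10:00:00.000+0000; truncate to a
        # SOQL-friendly UTC datetime literal.
        last_seen = row['CreatedDate'][:19] + 'Z'
    time.sleep(60)  # once a minute = 1,440 API calls per day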

The advantages of this solution include:
  • it's near real-time (or as real-time as the polling frequency you choose)
  • it has the ability to further integrate events from other Heroku apps that you've built
  • Heroku has a great add-on ecosystem that makes it easy to turn these events into insights

The disadvantages of this solution include:
  • Heroku's Logplex only persists the last 1,500 events and can be lossy, since it was really intended for logging performance trends rather than security events like escalation of privileges.
  • the polling app will count against API limits. If it polls every minute, it will cost you 1440 API calls per day.
To try this solution, check out the sample Setup Audit Trail Python script on my GitHub repo.