09 December 2016

Creating Custom Event Monitoring Wave Dashboards


Are you a Salesforce Shield or Event Monitoring customer using the Event Monitoring Wave App? Did you know it's pretty easy, and also very quick, to create custom views of your Event Monitoring data?

Mike Smith, a Salesforce App Cloud Solution Engineer based in the Denver, Colorado area, is an expert with Event Monitoring. In the short video below, Mike demonstrates how quickly you can create a custom dashboard in the Event Monitoring Wave App for logins over the last 7 days.

Enjoy!




07 December 2016

10 Event Monitoring Gifts for the Holidays

Guest post by Arastun "Russ" Efendiyev. Arastun is a Lead Solution Engineer for the Salesforce Platform based in the Greater Boston area. He works with many Salesforce Shield and Event Monitoring customers and is one of the leading experts on Salesforce Event Monitoring and Transaction Security. You can connect with Russ on LinkedIn and follow him on Twitter.



The holidays are around the corner, and odds are you're finding yourself with a shopping list full of gifts. But don't forget to reward yourself! If you have Event Monitoring, here are ten best practices you can take advantage of - put them on your list!

For those of you unaware of Event Monitoring, here is a quick cheat-sheet to get you up to speed.

Here are the Ten Best Practices.

  1. Sit down and define what you want to track. Have a Monday morning meeting about it. Take the Event Monitoring API reference doc, which provides a list of everything you can track for each event. Ask yourself, "What would the ideal dashboard look like?" Use the API doc to help you understand all the data points you can track. Maybe some of them exist outside of event logs (e.g., metadata/config changes) - that's okay, throw them onto the dashboard. Too often I see folks zero in on the technology and end up in a 'technology-first, business-second' dilemma. Reverse it! Just bring some good donuts to the meeting.

  2. Offload. The logs are retained in Salesforce for 30 days; then they're gone. Set up a process to export them to an archive of your choice. A big use case for Event Monitoring is retroactive forensic analysis on an individual or individuals who left the company. Let's say someone exported your top Contacts in a report and went to a competitor - and they did it 4 months ago. You have to be prepared to mine that data. Set up automation, such as the shell scripts provided here run as a scheduled cron job, or the Python script provided here run on a schedule (a minimal query sketch for finding the log files to download appears after this list). A SIEM tool can also help here and can keep the data after it ages out of Salesforce at 30 days. This is especially helpful because various SIEM tools excel at log aggregation and ingestion, which is exactly what Event Monitoring files feed into. Contact us and we can point you to SIEM vendors that have hot-pluggable adapters for Event Monitoring, if you want an out-of-the-box experience.

  3. Compliance vs. Productivity. It's definitely something we all live and breathe, especially in Financial Services, where I see a lot of companies try to walk this very fine and very important line. Use a security model that's too strict and your end users don't use the system, while security that's too lax invites more threats. Let's use the same example. So you're still worried about someone leaving for a competitor and taking all your reports? Okay, remove their privilege to export reports and decommission their Data Loader Connected App access. Problem solved, right? Whoops. Too bad you just prevented the user from doing the various things they need to do in Excel with the exported data. No worries here - Transaction Security, a feature of Event Monitoring that can alert or block on events in real time, can help. It lets you apply a granular scope to certain types of records and how they're exported. Maybe we only block anyone who exports an entire contact list, or over 100 contacts. Or, even more lax, maybe we let them do it but send an email alert saying they've done it. Or only email or block if they did it during off hours and exported over 100 contacts. Or email if they did it in off hours, their profile was Standard User, and their User record had a Suspicious__c checkbox flag turned on. You can get really flexible here and get much closer to the business, while letting the end users stay productive (see the policy condition sketch after this list).

  4. Keep up with our roadmap! We release three times per year. Obligatory forward-looking statement / #Safeharbor here. We have product managers mapped to Event Monitoring functionality, the Transaction Security aspect of it, and even our Wave Admin Analytics for Event Monitoring visualization. Take a look at a recent Winter '17 example of some things we put into Transaction Security. To broaden the picture: we executed on Transaction Security and Wave Admin Analytics, and both are undergoing further iterations. Part of that is feedback from YOU! What are the use cases that are important to you? Continue to share them with us and we're open to putting them on our roadmap.

  5. Data can come from elsewhere. Event Monitoring isn't the be-all and end-all. What about your authentication history? Your metadata/configuration changes? Your data and field history changes? They're easy to overlook - don't forget them for your next audit. As its name suggests, Event Monitoring provides insights on events, and data/metadata change tracking is a great complement. Wear your fancy audit shoes, seamlessly crunch out reports on all of these things, and hand them to the auditors.

  6. Join your friend objects. Okay, so you have your logs. Let's take a look at that example of ours - a user extracting a report. The logs capture a User ID such as 00530000009M943. Good start, but we need to make sense of it. Well, we have the User object. Plug that into your reporting solution on top of Event Monitoring, whether it's a SIEM or our Wave App. If I'm looking at a report, the Report ID is captured, which can be joined against the Report table to give it a name (versus an ID). We now find out that it was John Smith who looked at a Top Contacts report. And if I want to map a Profile to it, I know that Profile is tied to User via the User's ProfileId. The Wave Admin Analytics app is effective here since we've already brought over some of these friend objects! (See the small lookup sketch after this list.)

  7. QA Buddy. Not everyone has a QA team and/or QA automation, like Selenium, within their enterprise. When you test your authorization model, Login-As is a great feature (which is trackable in Event Monitoring, by the way). It lets you impersonate an end user so you can validate what they can or can't see - who can or can't view data, create it, edit/update it, and so on. Login-As is a QA process, whether it's you doing it or another individual checking your work. Here's where your thoughtful effort on Profiles, Permission Sets, Criteria-Based Sharing Rules, Apex Managed Sharing Rules, Org-Wide Defaults, Role Hierarchy, etc. is put to the test. Those are very important controls - we don't want the wrong individual within the org to see the wrong data, let alone act on it. With Event Monitoring, you have a true validation of the controls you put in place. You can assert that John Smith never saw a Case filed by Bob Jones for Payroll to validate that Bob's paycheck amount reflects the hours worked. Or you finally see that John was able to look at Bob's payroll case - so you know your controls need to be changed. This is true automation to validate your controls.

  8. Insights into Usage. As with the above example of John looking at Cases, you may find this insight beneficial beyond security validation. Let's say you want to see how much your Sales team, which John Smith is part of, is looking at customer-only Cases - you're trying to understand whether they are able to help the customer and/or even find new product-line opportunities. Event Monitoring can help track both use cases: being your QA buddy for the authorization model and showing how much functionality is being utilized. A Sales individual can look up a Case and give the customer a call. If they don't have Salesforce integrated with telephony, there may not be anything tied to that record, but the log will still show that John Smith viewed a customer-filed Case eight times in two days!

  9. Trailhead! We have released various Trailhead modules so you can get hands-on with this, and a checker verifies whether you got your work right or wrong in your Developer org - an account where you can test all of this. Here are a couple: the Transaction Security Trailhead and the Event Monitoring Trailhead. There are many other modules that cover security - feel free to search Trailhead depending on your security area of interest.

  10. Optimize where it matters. So you saw your complex Visualforce page, which uses Angular and is backed by a sophisticated Apex controller, encounter a slower render/request time. You spent 14 hours fixing it. Success! Actually, the behavior witnessed was close enough to the average request time. Remember, this was based on your perception of a slower request time. The kicker is that there are two other Visualforce pages behaving a good standard deviation out of the ordinary. Perhaps we should've chased after those! This is where Event Monitoring goes beyond security - it can help you justify which change requests/projects to allocate your time to. The next kicker: you encountered slowness on your fancy Visualforce page that was at or close to the average request time... but did you know that you and your fellow administrator/developer were the only ones using and testing it? Event Monitoring can tell you how much of your functionality is being used. So perhaps it's those other two Visualforce pages, used by a lot of users in the past few days, that need tweaking!
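
For practice number 2 above, here is a minimal sketch - anonymous Apex, assuming Event Monitoring is enabled and the running user has the "View Event Log Files" permission - that lists yesterday's event log files. An external archiving job (shell, Python, or your SIEM connector) would then fetch each file's CSV body from the REST endpoint noted in the comments.

// List yesterday's event log files so an external process can fetch and archive them.
List<EventLogFile> logFiles = [
    SELECT Id, EventType, LogDate, LogFileLength
    FROM EventLogFile
    WHERE LogDate = LAST_N_DAYS:1
];
for (EventLogFile f : logFiles) {
    // The CSV body itself is downloaded from the REST endpoint:
    // /services/data/vXX.0/sobjects/EventLogFile/<Id>/LogFile
    System.debug(f.EventType + ' (' + f.LogFileLength + ' bytes), logged ' + f.LogDate);
}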

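For practice number 3, here is a rough policy condition sketch in the spirit of the Transaction Security data export policy shown later in this blog. The class name is made up, it assumes you have created the Suspicious__c checkbox field on User, and it treats roughly 7am-7pm as business hours - adjust the names, hours, and thresholds to your own policy.

global class OffHoursContactExportCondition implements TxnSecurity.PolicyCondition {
    public boolean evaluate(TxnSecurity.Event e) {
        Integer numberOfRecords = Integer.valueOf(e.data.get('NumberOfRecords'));
        String entityName = e.data.get('EntityName');

        // Suspicious__c is a hypothetical custom checkbox you would add to User.
        User u = [SELECT Profile.Name, Suspicious__c FROM User WHERE Id = :e.userId];

        // Treat anything outside roughly 7am-7pm as off hours (adjust for your time zones).
        Integer hourOfDay = Datetime.now().hour();
        Boolean offHours = (hourOfDay < 7 || hourOfDay >= 19);

        // Trigger only for large Contact exports during off hours by a flagged Standard User.
        if ('Contact'.equals(entityName)
                && numberOfRecords > 100
                && offHours
                && u.Suspicious__c == true
                && 'Standard User'.equals(u.Profile.Name)) {
            return true;
        }
        return false;
    }
}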

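And for practice number 6, a tiny lookup sketch: resolving the raw IDs captured in a log line into human-friendly names (anonymous Apex; the User ID is the example from above, and the Report lookup is left as a comment since you would substitute the REPORT_ID from your own log line).

// Resolve the USER_ID captured in the log line into a name and profile.
Id userIdFromLog = '00530000009M943';
User u = [SELECT Name, Profile.Name FROM User WHERE Id = :userIdFromLog];
System.debug(u.Name + ' (' + u.Profile.Name + ')');

// Same idea for REPORT_ID -> Report.Name:
// Report r = [SELECT Name FROM Report WHERE Id = :reportIdFromLog];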
Happy holidays! Leave your comments below, or tweet and discuss using the #salesforcehacker hashtag. Many thanks also to Mike Jacobsen for his contribution to this blog.

01 December 2016

Two New Keys To Unlock Your Salesforce Users Event Data



Two New Keys To Unlock Your Users Event Data
Have you been exploring the new release with Event Monitoring? If so, you might have seen the Event Monitoring event log lines that contain Login_Key and Session_Key columns. These are new fields that tie together all the different events in a Salesforce user’s or admin’s login session or activity session, respectively.

Introducing Login Key and Session Key


The purpose of the Login Key and Session Key fields is to provide a specific identifier for a user's login session across the various log lines, giving customers a better 360-degree view of user behavior within the Salesforce application - whether for a security investigation, for understanding and exploring specific user behavior, or for researching a specific application or performance issue.

Let's see them in action. Here's an example showing URI event logs - in other words, users' click paths in the Salesforce application across the various generated log lines. To get a more concise view of what each user is doing, you can now use LOGIN_KEY as an identifier across the different events to tie them together, as well as to separate different sessions. The picture below shows an example of the LOGIN_KEY field within URI event logs.

Login Key and Session Key Examples


So how can you best make use of this identifier? I've collected a couple of examples here - please leave your thoughts and additional ideas in the comments below.

Your application can generate a ton of URI log lines, so when researching a specific user's log lines you can easily run into a needle-in-the-haystack problem. You can use LOGIN_KEY as a grouping mechanism to separate different user sessions and their volume of activity.

Example 1: Splitting User Activity Forensics by Different User Sessions  


Looking at URI events (i.e., page views) in the picture below, we've aggregated all URI logs for user Jari Salomaa on September 23rd. We can see there are five different LOGIN_KEYs separating the different sessions - logins from Salesforce1 Mobile and the Safari and Chrome browsers. One specific login session has over 200 log entries, which we can click to expand and investigate more closely to see which pages those URI logs contain.



Additionally, for security-conscious customers - whether on Sales Cloud, Service Cloud, or other Salesforce products - understanding data export activity is always important: who is downloading customer data to their local computers, and especially whether that happens in very large volumes.

As an example, building real-time alerts and policies is important when large-volume data export activity takes place outside typical business hours. This is often the case with compromised credentials and hacker groups based in countries and regions such as Russia, China, and Eastern Europe targeting valuable data. If you don't have business users logging in and exporting data in those regions, you can use LOGIN_KEY and SESSION_KEY to better understand past behavior against the different time zones your business operates in.

Example 2: Monitoring the Number of Report Exports with SESSION_KEY

Salesforce customers can obtain better visibility into their application's report export behavior by grouping the ReportExport log line dataset by the hour of the day.


How to identify non-business-hours data export activity and build alerts

  1. Use the Event Monitoring Wave App, any of your preferred data visualization tools, or the Event Log File Browser (if you have a small volume of logs) to download ReportExport log lines
  2. Group your ReportExport log lines by SESSION_KEY
  3. Sort the logs by hour of the day
  4. Identify non-business-hour ReportExport events based on your business hours
  5. Build an Apex policy with Transaction Security to alert on a specific threshold, e.g., downloads of the Account, Opportunity, Lead, Case, Contact, etc. entities within a specific timeframe


Example 3: Using LOGIN_KEY and SESSION_KEY as an identifier across the 25 supported log lines


  • Use it as an ID to construct a complete view of user activity for a forensic investigation - for example, to understand what the user did and which pages the user visited in a given login session - and pull all of that information together into its own table
  • Separate different activity sessions within a specific login session under a user's credentials - for example, when a user may have been logged in from API clients, the user interface, and other places, and it's hard to tell which session contains unwanted or suspicious behavior
  • Piece together otherwise complicated session keys into a more holistic view

Event Logs That Support Login and Session Key


1. Apex Callout - details about callouts (external requests) during Apex code execution
2. Apex Execution - details about Apex classes that are used
3. Apex SOAP - details about Web Services API calls
4. Apex Trigger - contains details about triggers that fire in an organization
5. API - contains details about your organization's Force.com Web Services API activity
6. Asynchronous Report Run - created for scheduled report requests, which include dashboard refreshes, asynchronous reports, scheduled reports, and analytics snapshots
7. Bulk API - contains details about Bulk API requests
8. Change Set Operation - contains information from change set migrations
9. Console - contains information about the performance and use of the Salesforce console whenever it is opened with a sidebar component
10. Dashboard - contains details about dashboards that users view
11. Login - your organization’s user login history
12. Metadata API Operation - contains details of Metadata API retrieval and deployment requests
13. Multiblock Report - contains details about joined reports
14. Package Install - contains details about package installation in the organization
15. Queued Execution - details about queued executions, for example Batch Apex
16. Report - contains information about what happened when a user ran a report
17. Report Export - contains details about reports that a user exported
18. REST API - contains details about REST specific requests
19. Sites - contains details of site.com browser UI or API requests
20. Transaction Security - contains details about policy execution
21. URI - contains details about user interaction with the web browser based UI
22. Visualforce Request - contains details of browser UI or API based Visualforce requests
23. Wave Change - represents route or page changes made in the Salesforce Wave Analytics user interface
24. Wave Interaction - tracks user interactions with the Wave Analytics user interface
25. Wave Performance - helps you track trends in your Wave Analytics performance

For more details about supported events, see the SOAP API Guide, which is updated each release. Thanks to Melissa Kulm, Mike Jacobsen, and Lakshmisha Bhat for their invaluable feedback and comments on this blog.

Please feel free to leave feedback below!



02 October 2016

Event Monitoring at Dreamforce 16

Getting ready for Dreamforce? 

Mark your calendars and come join the session about Event Monitoring and Field Audit Trail on Thursday 6th October at 3.30pm - 4.15pm at Moscone West.

We'll also have Yousuf Khan, VP of IT at PureStorage, presenting their Event Monitoring project for their Salesforce application.

We'll provide the latest roadmap details and insights into how to get the most out of your Salesforce application for security and compliance monitoring, application development and performance monitoring, as well as user behavior and adoption monitoring.

We'll also highlight some of the exciting ISV solutions built on top of the Event Monitoring APIs to help you analyze, optimize, and grow your application securely.



Also remember to check out the latest Salesforce Shield and Event Monitoring demos at the Salesforce Expo Campground during the conference. We'll have staff on hand to answer customer and partner questions about using the analytics APIs for logs, Login Forensics, and Transaction Security policies.

Details about the Event Monitoring session available here.

You might additionally be interested in checking out our Platform Encryption - Bring Your Own Key session.

See you at Dreamforce! Hope you have a great time!

Cheers, Jari

07 July 2016

Get Your Event Monitoring Wave App

Hey there! Salesforce Shield and Event Monitoring expand from the Event Log File API to built-in, out-of-the-box data visualization with the Event Monitoring Wave App - now Generally Available (GA)! Big thanks to Adam for all the heavy lifting with the Admin Analytics pilot (the app's former name).



If you missed the announcement in June, here's what you need to know:
  • Event Monitoring supports 32 different Salesforce event types, and it can be quite a job to integrate the data flow, figure out which events to subscribe to and visualize, and build custom dashboards
  • Event Monitoring customers and partners now have access to the Event Monitoring Wave App, with 15 built-in dashboards covering the core use cases: (1) security, (2) application development and performance monitoring, and (3) Salesforce use and adoption
  • The Event Monitoring Wave App integrates with the Event Log Files API, providing immediate value out of the box simply by turning Event Monitoring on for your org, plus a great point-and-click interface to slice and dice the data and customize dashboards your own way
  • The Event Monitoring Wave App is licensed for 10 users and a 50-million-row limit, and there's a configuration wizard to select which datasets to include and for how long (the default is 7 days), depending on your org's data volume


Security and compliance are very strong drivers for Event Monitoring customers, and we have spent most of our time building different views for security- and compliance-related dashboards. Hope you enjoy them - here's a quick walkthrough of each:
  • My Trust: inspired by trust.salesforce.com, My Trust is a single place to view the health of your Salesforce app, active users, total transactions, average and max page time and end user page time. Drill down to different event types and compare daily trends.
  • Report Downloads: see the percentage of viewed reports that resulted in exports, as well as report export trends by user agent and IP, which can be filtered down to inactive users to surface suspicious or large-volume data export activity
  • REST API: analyze who is using the API - for example, with Data Loader to manage or move large data sets - and identify possible REST API hot spots used by managed packages
  • Login As: understand admin behavior when logging in as end users and identify possible abuse: where they logged in, who they are, and what pages were accessed
  • User Logins: see login trends per user and who is using the application the most, identify IP addresses with shared logins as signs of suspicious use, and understand which browsers are being used and the average time spent logged in
  • Setup Audit Trail: identify what admins are doing in Setup and keep track of the most common audit changes and their types
  • Files: get visibility into which files are being downloaded by different roles and over what period of time, to help identify the top files or the resources that are barely being used

Application development and performance is also a very important area to continuously monitor, so you can stay on top of application health and understand whether some reports are taking a long time to produce or certain Apex jobs should be scheduled differently to avoid hitting governor limits. Here's what we've built for Salesforce developers:


  • Apex Execution: helps you prioritize which Apex classes to fix to improve overall performance by comparing overall Apex performance, CPU time, and SOQL and DML interactions based on total DB time
  • Reports: see report usage trends across users and profiles, identify top reports, and get visibility into the most used reports as well as how long they take to load
  • API: see API trends per object and overall API performance over a given period of time, including average CPU time per API call
  • Dashboards: get visibility into dashboard usage trends over time and the performance of these dashboards so you can prioritize troubleshooting


Last but definitely not least, understanding adoption and user engagement for the Salesforce application is key. What are my users doing, how and when are they accessing the application, and what are the top resources or click paths? These insights are valuable for lines of business, executives, and IT teams as well as developers:

  • Lightning SFX: provides visibility into which users are using the new Lightning user interface and how it's performing - see how many total user interactions took place and what the average and max end-user page time (EPT) look like
  • Page Views (URI): see which pages users are clicking the most and how much time they are spending, on average, on those pages. Drill down to additional details for a specific user and the pages he or she is accessing, or drill down to an actual page to see which users are using it
  • Visualforce Requests: see the most used Visualforce pages and prioritize troubleshooting based on performance - e.g., by sorting on runtime you can quickly see the slowest pages - or track AppExchange adoption
  • Wave Adoption: last but not least, you've pushed out the Event Monitoring Wave App or Sales or Service Wave, and you want to know whether your users are actually using it - identify details at the user level, how many interactions they have with Wave dashboards, and which ones they are customizing
We hope you enjoy the app and find these built-in visualizations useful. You can assign your 10 permission set licenses as viewers or as editors/managers. If you require more users or are nearing the 50-million-row limit, you can get in touch with your Account Executive to get more with the Wave Platform.

If you are an existing Event Monitoring customer and haven't yet tried out the Event Monitoring Wave App, please follow these instructions to get set up. If you're a new customer interested in learning more about Event Monitoring and the Event Monitoring Wave App, get in touch with your Salesforce Account Executive to get started.

For anything else, please leave questions or comments here or reach out on Twitter to @salomaa. Thanks and sunny summertime from San Francisco!


-Jari


08 June 2016

New in Summer '16 with Event Monitoring and Transaction Security

Event Monitoring with Transaction Security has expanded support for policy options from API-based report export events to also cover UI-triggered report exports. Have you seen the updated Apex class for the Data Export policy on Leads? This means any type of report export for specific resources like Leads, Accounts, or Opportunities can be tied to a specific trigger that applies the appropriate security condition according to your security policies. In the stock policy below, the condition is defined as more than 2,000 records or more than 1,000 ms of execution time, which would indicate a large data download. You're free to customize the resource and condition in your own policy.

global class DataLoaderLeadExportCondition implements TxnSecurity.PolicyCondition {
    public boolean evaluate(TxnSecurity.Event e){
        // The event data is a Map<String, String>.
        // We need to call the valueOf() method on appropriate data types to use them in our logic.
        Integer numberOfRecords = Integer.valueOf(e.data.get('NumberOfRecords'));
        Long executionTimeMillis = Long.valueOf(e.data.get('ExecutionTime'));
        String entityName = e.data.get('EntityName');

        // Trigger the policy only for an export of Leads where more than 2,000 records
        // are downloaded or the export took more than 1 second (1,000 ms).
        if('Lead'.equals(entityName)){
            if(numberOfRecords > 2000 || executionTimeMillis > 1000){
                return true;
            }
        }

        // For everything else don't trigger the policy.
        return false;
    }
}

This helps security teams stay in the know with real-time actions on large data export events and shield against unwanted data loss.

You can use various triggers - such as time, geolocation, IP, profile, etc. - to customize the report export criteria. Simply choose the Data Export event from the dropdown menu, select Account, Case, Contact, Lead, or Opportunity as the resource name, and apply your desired action: in-app notification, email notification, two-factor authentication, or block.

Please see the following short demo of applying a policy condition to report exports of Accounts.




01 June 2016

Login Forensics: Login History plus for auditing user logins

The Salesforce App Cloud platform has important auditing capabilities built in to ensure that you can focus on what's most important: your business. One of these foundational audit tools is Login History.  The Login History audit trail enables administrators to download the last six months of logins to the Force.com platform, either via a CSV download link in the setup user interface or via the API. With Login History, you can track login successes and failures by user, IP, application, API, or browser to name a few key attributes. In addition, Event Monitoring provides access to the Login log lines as well. As you can tell, we consider Login an important event to keep track of!

We're proud to announce the general availability of a premium add-on service on top of our Event Monitoring product line that goes beyond both the Login log line as well as Login History by tracking login information for ten years!

Here's a breakdown of how the three compare:

  • Login History: data kept for 6 months; stored in Oracle; accessed via the Setup UI or the API; requires the "Manage Users" permission; not extensible; included with every org; sObject name: LoginHistory
  • Login Forensics: data kept for 10 years; stored in HBase; accessed via the API only; requires the "View Login Forensics Events" permission; extensible via Additional Information; included with the Event Monitoring add-on; sObject name: LoginEvent
  • Login log line: data kept for 30 days; stored in Oracle; accessed via API download only; requires the "View Event Log Files" permission; not extensible; included with the Event Monitoring add-on; file: Login event type

How is it possible to store this critical data for so long? Salesforce recently adopted an open-source NoSQL database called HBase. HBase is the same database that we use to store up to 10 years of Field Audit Trail data.



Who cares? Well, I do. As does anyone who wants to maintain an audit trail of login information either for regulatory reasons or to track down anomalous login activity. For instance, imagine that a user always logs in from the same IP address, or during the same login hours, or using the same Chrome browser on Windows. Well, wouldn't it be strange if all of a sudden those behaviors changed over the course of a day, a week, a month, a year, or even a decade? 

All of this is possible with SOQL because of the HBase rowkeys we've defined. An HBase rowkey defines how we index these objects for fast queries. Imagine querying a billion rows of LoginEvent records from the past decade in less than 120 seconds - that's fast and furious query performance.

The LoginEvent object, which stores the raw login data, has a rowkey consisting only of EventDate (in a descending sort) and the unique record Id. And the PlatformEventMetric object, which stores the hourly roll-up metrics, has a rowkey consisting of EventType and then EventDate (in a descending sort). These simple rowkeys enable fast responses using standard SOQL. You just need to know the time frame you want to query and, in the case of metrics, which metric you want for that time frame.

SELECT Application, Browser, EventDate, Id, LoginUrl, UserId
FROM LoginEvent
WHERE EventDate>Yesterday
LIMIT 10

This works because EventDate is the first field in the rowkey and the sort works because of the way we store the rows in descending sort order. This is powerful for querying the last ten Login Events that happened in near real-time. 

It’s also powerful for integrating. You can create a polling app that queries every minute in the case of the raw events, and every hour in the case of the metrics, in order to easily integrate the last set of login data since the last query.

Alternatively, you can use the Asynchronous SOQL solution outlined in my previous blog post: Using Asynchronous SOQL with Event Monitoring.

Events are captured in near real-time. What does that mean? Well, there can be a minor delay between when the event occurred and when you can query it. If you want, you can self-monitor the near real-time nature of our events: take the average difference between the EventDate and CreatedDate fields, and you'll see how quickly your events have been captured.

Near Real Time Example
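
Here is a rough anonymous-Apex sketch of that measurement, assuming Login Forensics is enabled and the running user has the "View Login Forensics Events" permission:

// Average the gap between when each login event occurred (EventDate)
// and when it became queryable (CreatedDate).
List<LoginEvent> events = [
    SELECT EventDate, CreatedDate
    FROM LoginEvent
    WHERE EventDate > Yesterday
    LIMIT 200
];
Long totalMillis = 0;
for (LoginEvent ev : events) {
    totalMillis += ev.CreatedDate.getTime() - ev.EventDate.getTime();
}
if (!events.isEmpty()) {
    System.debug('Average capture delay: ' + (totalMillis / events.size()) + ' ms');
}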

There's even the ability to introduce your own metadata into the login flow to further fingerprint a user's login profile and identify anomalies in the login process. We call it Additional Info. It's the ability to introduce your own data through an HTTP header, which can be done via the browser, a proxy, or API authentication. For instance, you might register a header name (e.g., "x-sfdc-addinfo-correlationid") and value (e.g., "d18c5a3f-4fba-47bd-bbf8-6bb9a1786624"). Then, when you look at your login events, you just need to look for any logins that do not have this identifier to investigate further.



Finally, there's a transaction dye that's important to the process. Every Login Event can be traced back to a single Login History Id. This is useful for a couple of reasons. The first is that Login History connects to LoginGeo, which captures geographical information like the latitude and longitude of your users. As a result, you can use the composite API to orchestrate otherwise unrelated queries and plot the location of every user on a mapping service like Google Maps. Secondly, with each subsequent activity where the user interacts with data - like looking at accounts - you'll be able to track each interaction back to a single login on both the Login Event and Login History objects; for example, when tracking down which records were viewed from an API query (see the Data Leakage blog post where this is explained). And after six months, when Login History is deleted, you'll continue to be able to track every interaction back to a single login for another nine and a half years. So even if you log in via your iPhone, your Nexus tablet, your Chrome browser on your Mac, and Salesforce for Outlook, we'll be able to separate each set of transactions and link them back to a single login for the next ten years.
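
As a hedged anonymous-Apex sketch of the geo piece (field and relationship names are taken from the standard LoginHistory and LoginGeo objects - verify them against your API version):

// Pull recent logins, then look up their geographic details via LoginGeo.
List<LoginHistory> logins = [
    SELECT UserId, LoginTime, LoginGeoId
    FROM LoginHistory
    WHERE LoginTime = LAST_N_DAYS:7
    LIMIT 100
];
Set<Id> geoIds = new Set<Id>();
for (LoginHistory lh : logins) {
    if (lh.LoginGeoId != null) {
        geoIds.add(lh.LoginGeoId);
    }
}
for (LoginGeo geo : [SELECT City, Country, Latitude, Longitude FROM LoginGeo WHERE Id IN :geoIds]) {
    System.debug(geo.City + ', ' + geo.Country + ' (' + geo.Latitude + ', ' + geo.Longitude + ')');
}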



All of the screenshots in this post can be recreated using the sample code found in my GitHub repo.

Login Forensics ushers in a new age of storing near real-time, system-generated user activity on the Salesforce platform.