Appirio Technology Blog

Tuesday, October 21, 2008

Using Client-Side Looping to Work within Salesforce.com Governor Limits

Chris Bruzzi

Repeat after me: the governor is our friend. It stops us from doing things we really shouldn't be doing, so in a way the governor makes us better people. At least as far as SaaS development goes.

As you may already be all too familiar, Salesforce.com imposes limits to ensure that no customer monopolizes the resources of its shared, multi-tenant environment. These limits are called governors and are detailed in the Understanding Execution Governors and Limits section of the Apex Language Reference. If a script exceeds one of these limits, the associated governor issues a runtime exception and code execution is halted.
I am about to guide you through a simple example of using client-side looping in Visualforce to execute server-side Apex code that would otherwise exceed the governor limits.
Modifying your Apex
There are a number of situations where a solution like this might be helpful, but consider this one: you want to move 10 million records from Source_Object__c to Target_Object__c via Apex. You would hit the governor limits on the number of records retrieved via SOQL and the number of records processed via DML, to name just a few.
Assuming there isn't already an autonumber field on Source_Object__c that could help us keep track of our progress processing the records, we'll first need to add a checkbox field to Source_Object__c called Processed__c.

We can then use that field in our SOQL query to ignore records already processed, and likewise set it to true as we process records. You would then modify your method along the lines of the code below; the key additions are the Processed__c filter and the Limits checks.


global class BatchProcessDemo {
    webservice static void processItems() {
        // Use at most half of the remaining query-row allowance for this call
        Integer queryLimit = (Limits.getLimitQueryRows() - Limits.getQueryRows()) / 2;
        for (List<Source_Object__c> sourceItemList : [select Id, Color__c, Weight__c
                                                      from Source_Object__c
                                                      where Processed__c = false
                                                      limit :queryLimit]) {
            List<Target_Object__c> itemsToInsert = new List<Target_Object__c>();
            for (Source_Object__c sourceItem : sourceItemList) {
                sourceItem.Processed__c = true;
                Target_Object__c targetItem = new Target_Object__c();
                targetItem.Color__c = sourceItem.Color__c;
                targetItem.Weight__c = sourceItem.Weight__c;
                targetItem.Source_Object__c = sourceItem.Id;
                // Flush pending inserts before we bump into the DML row limit
                if (Limits.getDMLRows() + itemsToInsert.size() + 1 >= Limits.getLimitDMLRows()) {
                    insert itemsToInsert;
                    itemsToInsert.clear();
                }
                itemsToInsert.add(targetItem);
            }
            update sourceItemList; // mark this batch of source records as processed
            insert itemsToInsert;
        }
    }
}


Creating the Visualforce Page
As mentioned in a previous post by Frank and Kyle, make sure you have Development Mode enabled and then point your browser to http://server.salesforce.com/apex/BatchDemo to create your page. Click Page Editor in the bottom left of the browser to open the Visualforce editor. Add the following code between the <apex:page> tags to set up our form:

<apex:sectionHeader title="Demo"/>
<apex:form>
    <apex:pageBlock title="Perform Batch Process">
        <apex:panelGrid columns="2" id="theGrid">
            <apex:outputLabel value="Max. # of Iterations"/>
            <input type="text" value="1" name="iterations" id="iterations"/>
        </apex:panelGrid>
    </apex:pageBlock>
</apex:form>

You'll notice we use standard HTML input fields rather than Apex input fields, since no Visualforce controller is required. The fields are only used on the client side, via JavaScript, to batch our calls to Apex.
Add a <div> tag immediately after the </apex:panelGrid> tag to display progress during the batch processing.

<div id="progress" style="color: red"/>

After the <div> tag, add a button to allow us to kick off the processing.

<apex:pageBlockButtons>
    <input type="button" onclick="batchProcess()" value="Start" class="btn"/>
</apex:pageBlockButtons>

Next, we'll need to define the batchProcess() JavaScript function by adding the following code after the opening <apex:page> tag.

<script type="text/javascript">
    function batchProcess() {
        var iterations = parseInt(document.getElementById("iterations").value, 10);
        var progress = document.getElementById("progress");
        sforce.connection.sessionId = "{!$Api.Session_ID}"; // authenticate the AJAX Toolkit with the current session
        for (var i = 1; i <= iterations; i++) {
            progress.innerHTML = "Processing iteration " + i + " of " + iterations + " iterations.";
            sforce.apex.execute("BatchProcessDemo", "processItems", {});
        }
        progress.innerHTML = "Completed processing " + iterations + " iterations!";
    }
</script>
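One note: sforce.connection and sforce.apex.execute come from the Force.com AJAX Toolkit, so the page also needs to load the toolkit scripts above this block. A minimal sketch (the API version shown is just an example; use one available in your org):

<script src="/soap/ajax/14.0/connection.js" type="text/javascript"></script>
<script src="/soap/ajax/14.0/apex.js" type="text/javascript"></script>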

Click Save. Now you can click the Start button on your Visualforce page to perform the job in batches.

Thursday, October 16, 2008

Google Apps Auth Backend for Django

Tim Garthwaite

Google loves Python. Google's original web spider, which crawled the web to create its search index, was written while Larry Page and Sergey Brin were still graduate students at Stanford, and rumor has it that it went live written completely in Python. I learned in university (circa 2000) that the Python code performed well enough that much of it was still Python at that point, although much of it had been highly optimized in platform-specific C. Moreover, Google's new Platform-as-a-Service (PaaS), AppEngine, which allows anyone in the world to host complete web applications "in the cloud" for free (heavy use is charged at far below-market rates), currently supports only one language: you guessed it, Python. Google has said it will release AppEngine SDKs for other languages, but for now Python is the only option.

AppEngine, it can be argued, may not be ready for prime-time commercial or enterprise use, because it does not support SSL for all communication between the browser and its servers. Authentication can be done safely by redirecting to a secure login page and returning with a token, but the token (and all your corporate data) is then passed back and forth in plaintext. Google has promised to add SSL support to AppEngine, but until it does, Appirio's Google Practice has begun recommending the full Django platform (on Apache or, heaven forbid, IIS) for internally developed applications, in the expectation that converting these web applications to AppEngine later will be relatively painless.

The AppEngine Python SDK comes with much of the Django framework pre-installed, including its fantastic templating system. The Object-Relational Mapping (ORM) system built into AppEngine is remarkably similar to the ORM that comes with Django, and the AppEngine authentication system is likewise similar to its Django equivalent. These facts should make converting custom in-house Django applications to AppEngine in the future relatively painless, letting you throw out your pesky web servers and gain the best features of the world's most robustly distributed computing platform in the process.

So let's say you wish to go ahead with creating Python/Django web applications in-house. Django comes with an authentication framework that allows for custom back-ends, meaning that you can test username/password combinations against an arbitrary back-end system, such as Active Directory or any other LDAP system, or even against users stored in a custom database. For one of Appirio's clients who is fully embracing the cloud, including Google Mail, Calendar, and Docs corporate-wide, it made the most sense for a certain application to authenticate against Google Apps itself using Google's Apps Provisioning API. Here's how I accomplished this.

First, you must create the back-end Python class. For example purposes, I have created a 'mymodule' directory (anywhere on my Python path) containing an empty __init__.py file (which tells Python to treat the directory as a package) and the file django_backend.py. Of course, you must replace "mydomain.com" with your own domain, and as your Python code base grows you should adhere to a more logical standard for where you place your libraries; it's worth settling on a convention now so you won't have to refactor later. In my system, the class file lives in the 'appirio.google' module. Here are the contents of the file:

from django.contrib.auth.models import User, check_password
from gdata.apps.service import AppsService
from gdata.docs.service import DocsService

DOMAIN = 'mydomain.com'
ADMIN_USERNAME = 'admin_user'
ADMIN_PASSWORD = 'p@s$w3rd'

class GoogleAppsBackend:
    """ Authenticate against Google Apps """

    def authenticate(self, username=None, password=None):
        user = None
        email = '%s@%s' % (username, DOMAIN)
        admin_email = '%s@%s' % (ADMIN_USERNAME, DOMAIN)
        try:
            # Check the user's password by logging in to a user-accessible API (Google Docs)
            gdocs = DocsService()
            gdocs.email = email
            gdocs.password = password
            gdocs.ProgrammaticLogin()
            # Retrieve the Google Apps user object via the Provisioning API (requires admin credentials)
            gapps = AppsService(domain=DOMAIN)
            gapps.ClientLogin(username=admin_email,
                              password=ADMIN_PASSWORD,
                              account_type='HOSTED', service='apps')
            guser = gapps.RetrieveUser(username)
            # Get or create the matching Django User and sync its fields from Google Apps
            user, created = User.objects.get_or_create(username=username)
            user.email = email
            user.last_name = guser.name.family_name
            user.first_name = guser.name.given_name
            user.is_active = not guser.login.suspended == 'true'
            user.is_superuser = guser.login.admin == 'true'
            user.is_staff = user.is_superuser
            user.save()
        except:
            pass
        return user

    def get_user(self, user_id):
        user = None
        try:
            user = User.objects.get(pk=user_id)
        except:
            pass
        return user

Let's briefly review this code. authenticate() uses the GData Python library to verify that the username and password match an actual Google Apps account. Since an administrator account is required to use the Provisioning API, I chose an arbitrary user-accessible API (Google Docs) to verify the user's password. If the password doesn't match, an exception is thrown, None is returned, and the login fails. If it does match, we log in to the Provisioning API with admin credentials to get the Google Apps user object, guser. Then, using a built-in helper method, we get the Django User object with the matching username, creating it if necessary. Either way, we take the opportunity to update the User object with data from Apps. get_user() is a required method (we are writing a class that satisfies a "duck-typed" interface rather than inheriting from a base class); it simply returns the matching Django User, if one exists, or None.

Finally, to enable this back-end, modify the site's settings.py, ensuring 'django.contrib.auth' is included in INSTALLED_APPS and adding 'mymodule.django_backend.GoogleAppsBackend' to AUTHENTICATION_BACKENDS (a sketch of this change follows the template example below). You can now test logging in to your site as Google Apps users. If you have enabled 'django.contrib.admin', you can then log in to your site's admin console and see that these users were automatically added to your Django auth system. You could also easily create a web page that lists these users by passing 'users': User.objects.all() into a template and writing template code such as:

<ul>{% for user in users %}<li>{{ user.email }}</li>{% endfor %}</ul>
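And here is a minimal sketch of the settings.py change described above (the extra INSTALLED_APPS entries are the usual defaults; the ModelBackend line is optional if you still want ordinary Django logins to work):

# settings.py (sketch)
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    # ... your other apps ...
)

AUTHENTICATION_BACKENDS = (
    'mymodule.django_backend.GoogleAppsBackend',
    'django.contrib.auth.backends.ModelBackend',  # optional: keep normal Django logins working
)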

We hope you find this code useful. Feel free to use any or all of it in your own Django web applications. If you do, please let us know in the comments!

Wednesday, October 8, 2008

Calendar Resource Management with the Google Data API

Matt Pruden

In many enterprises, there is no piece of real estate more scarce than an unoccupied conference room. With so much importance placed on conference rooms, their rigorous management is critical to a successful Google Apps deployment.

While Google Calendar offers a flexible system for reserving conference rooms, projectors, scooters, or any other shared resource, it does not provide a documented API for creating, updating, and deleting resources. Instead, you must manually manage resources through the Google Apps control panel. Manual management may work for a small number of resources but becomes unscalable when managing thousands.

However, creative developers can find just such a Google Data (GData) API for provisioning resources. In this post, we'll explore how to create, read, update, and delete calendar resources using GData through cURL, the commonly available command-line HTTP client.

Discovering Calendar Resource support in GData


Each type of entry in Google, whether a spreadsheet row, user account, or nickname, has a collection URL. In true REST fashion, a GET request to the collection URL returns a list of entries. For example, a GET request to http://www.google.com/calendar/feeds/default/private/full will return a feed of calendar event entries. Likewise, a POST to this URL will add a new event entry to a calendar. So, to retrieve and create resources, we first need to discover the collection URL for calendar resources.
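As a quick sketch of that pattern, listing your calendar events with cURL looks like this (the auth token is a placeholder; obtaining one is covered in Google's cURL documentation referenced below):

curl -s --header "Authorization: GoogleLogin auth=DQAAAH4AA" \
     "http://www.google.com/calendar/feeds/default/private/full"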

A calendar resource has many of the same characteristics as a user. For example, a calendar resource can be a meeting attendee and can be browsed by clicking "check guest and resource availability" in the Calendar user interface. Also, a calendar resource isn't tied to a particular user when it is created. It is reasonable to believe that managing calendar resources through the API might closely mimic managing users through the provisioning API.

In the provisioning API, the collection URL for user accounts looks like this: https://apps-apis.google.com/a/feeds/domain/user/2.0. What if we change user to resource, resulting in a URL like this: https://apps-apis.google.com/a/feeds/domain/resource/2.0? The example below uses cURL to send a GET request to the new URL. For details on using cURL with GData, see Google's documentation.

curl -s -k --header "Authorization: GoogleLogin auth=DQAAAH4AA" https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0 | tidy -xml -indent -quiet
<?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:gCal="http://schemas.google.com/gCal/2005" xmlns:apps="http://schemas.google.com/apps/2006" xmlns:gd="http://schemas.google.com/g/2005"> <id>https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0</id> <updated>1970-01-01T00:00:00.000Z</updated> <category scheme="http://schemas.google.com/g/2005#kind" term="http://schemas.google.com/apps/2006#resource"/> <link rel="http://schemas.google.com/g/2005#feed" type="application/atom+xml" href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0"/> <link rel="http://schemas.google.com/g/2005#post" type="application/atom+xml" href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0"/> <link rel="self" type="application/atom+xml" href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0"/> <openSearch:startIndex>1</openSearch:startIndex> <entry> <id>https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0/-81411918824</id> <updated>1970-01-01T00:00:00.000Z</updated> <category scheme="http://schemas.google.com/g/2005#kind" term="http://schemas.google.com/apps/2006#resource"/> <title type="text">Bldg 3, room 201</title> <link rel="self" type="application/atom+xml" href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0/-81411918824"/> <link rel="edit" type="application/atom+xml" href="https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0/-81411918824"/> <gd:who valueString="Bldg 3, room 201" email="mydomain.com_2d3831343131393138383234@resource.calendar.google.com"> <gCal:resource id="-81411918824"/> </gd:who> </entry> </feed>

We've found the collection URL for calendar resources! Now we just need to determine the XML schema for an individual resource. An hour of trial and error results in the following schema:

<?xml version='1.0' encoding='UTF-8'?>
<ns0:entry xmlns:ns0="http://www.w3.org/2005/Atom">
  <ns0:category scheme="http://schemas.google.com/g/2005#kind"
                term="http://schemas.google.com/apps/2006#resource"/>
  <ns1:who valueString="long name" xmlns:ns1="http://schemas.google.com/g/2005">
    <ns2:resource id="short name" xmlns:ns2="http://schemas.google.com/gCal/2005"/>
  </ns1:who>
</ns0:entry>

Since Google already does a great job of explaining the GData API, this post will not repeat that information. Instead, you can use the collection URL and entry schema in the same fashion as the other GData APIs to create, read, update, and delete calendar resources.
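For example, creating a new resource is a matter of POSTing an entry that follows the schema above to the collection URL. Here is a sketch with cURL (the auth token, domain, and resource names are placeholders, and since this API is undocumented the behavior may change):

# resource.xml contains the entry schema above with your own "long name" and "short name" values
curl -s -k -X POST \
     --header "Authorization: GoogleLogin auth=DQAAAH4AA" \
     --header "Content-Type: application/atom+xml" \
     --data-binary @resource.xml \
     https://apps-apis.google.com/a/feeds/mydomain.com/resource/2.0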

Tuesday, October 7, 2008

Overcoming Customer Portal Object Access Limitations Using Proxies

Michael McLaughlin

If you have ever tried exposing a Campaign, Contract, Lead, Opportunity, Pricebook, or Product in Customer Portal, you have most likely been met with the dreaded "Insufficient Permissions" screen. Customer Portal hides these standard objects for obvious reasons (you don't necessarily want external users to access your organization's most proprietary data); however, there are times when allowing read access to these objects would facilitate certain operations. For example, it would be great to expose your product catalog (i.e. Product, Pricebook, and PricebookEntry) to your customers. How can read access be achieved given these limitations? The workaround described below uses what I will call "proxy classes" that can stand in for these blocked standard objects.

The first step to using a proxy class is to create a custom object through Salesforce's administrative control panel. This is your opportunity to create an object that contains the fields you want from the standard object, plus any additional fields that might be handy, such as a formula field concatenating different values or even fields from other objects that you can reach via an object-to-object relationship. The idea is to create an object that mimics (closely or completely) the standard object you are otherwise unable to see in Customer Portal. When creating the proxy object, the key is to establish a connection to the blocked standard object. This is done by creating a Lookup field on the proxy object that points to the standard object. By creating this Lookup, you have created a foreign key into the standard object, and you can now access its other fields by leveraging the relationship: in Apex, you can write RelationshipName__r.OtherField to reach them. The Lookup you created is the gateway into the object. Remember to enable permissions on the proxy object for Customer Portal users!
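For instance, assuming a hypothetical Product_Proxy__c object with a Product__c lookup to Product2, a query in Apex could traverse the relationship like this sketch:

// Sketch only: Product_Proxy__c and its Product__c lookup are assumed names
List<Product_Proxy__c> proxies = [select Id, Name, Product__r.Family, Product__r.ProductCode
                                  from Product_Proxy__c];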

Now that your proxy class is created and mimics the standard object, you need to pump some data into it. For an initial data load, use the Apex Data Loader to 1) export data from the standard object into a CSV, 2) manipulate the resulting CSV as necessary, and 3) map the exported CSV back into an import for your proxy class. An alternative is to write an Apex class that loops through the standard object and inserts the data into the proxy class (a sketch follows). Use whatever data loading technique you are most comfortable with.
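Continuing with the hypothetical Product_Proxy__c example, the Apex route might look something like this sketch (for a very large catalog you would batch the work, as in the governor limits post above):

// Sketch only: copies Product2 records into a hypothetical Product_Proxy__c object
public class ProductProxyLoader {
    public static void copyProducts() {
        List<Product_Proxy__c> proxies = new List<Product_Proxy__c>();
        for (Product2 p : [select Id, Name, ProductCode from Product2]) {
            Product_Proxy__c proxy = new Product_Proxy__c();
            proxy.Name = p.Name;
            proxy.Product_Code__c = p.ProductCode;  // assumed custom field on the proxy
            proxy.Product__c = p.Id;                // the Lookup is the gateway back into the standard object
            proxies.add(proxy);
        }
        insert proxies;
    }
}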

Armed with a data-populated proxy class, you are now ready to expose this data to Customer Portal. You can use the proxy class in place of your standard object in all of your Visualforce pages, tabs, related lists, etc. You are simply using a proxy that has permissions in Customer Portal in place of the blocked standard object. The data is the same (or even customized, depending on how you structured the proxy), but now you can see and work with it.

Finally, you will want to keep your proxy object populated with fresh, current data from the standard object. This can easily be done by adding a trigger to the standard object that updates the proxy. Keep in mind that triggers are not allowed on certain classes (for example, Pricebook and PricebookEntry). A creative workaround is to use a batch update as described here.
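As a sketch of the trigger approach, assuming a hypothetical Opportunity_Proxy__c object with an Opportunity__c lookup and a mirrored Amount__c field, something along these lines keeps the proxy current:

// Sketch only: keeps a hypothetical Opportunity_Proxy__c in sync with Opportunity
trigger SyncOpportunityProxy on Opportunity (after insert, after update) {
    // Find any existing proxy records for the opportunities being saved
    Map<Id, Opportunity_Proxy__c> proxiesByOppId = new Map<Id, Opportunity_Proxy__c>();
    for (Opportunity_Proxy__c proxy : [select Id, Opportunity__c, Amount__c
                                       from Opportunity_Proxy__c
                                       where Opportunity__c in :Trigger.newMap.keySet()]) {
        proxiesByOppId.put(proxy.Opportunity__c, proxy);
    }
    List<Opportunity_Proxy__c> toUpsert = new List<Opportunity_Proxy__c>();
    for (Opportunity opp : Trigger.new) {
        Opportunity_Proxy__c proxy = proxiesByOppId.containsKey(opp.Id)
            ? proxiesByOppId.get(opp.Id)
            : new Opportunity_Proxy__c(Opportunity__c = opp.Id);
        proxy.Name = opp.Name;          // assumes the proxy uses a text Name field
        proxy.Amount__c = opp.Amount;   // assumed custom field mirroring Amount
        toUpsert.add(proxy);
    }
    upsert toUpsert;
}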



Thursday, October 2, 2008

Google Earth Integration via Visualforce

The Visualforce "contentType" page attribute makes it easy to push data from Salesforce directly to other applications. Here, we'll review an example using Google Earth: we use KML to view Salesforce Opportunities on a 3D map. Let's start with the page itself:

    <apex:page controller="KMLController" cache="true" showHeader="false" contentType="application/vnd.google-earth.kml+xml">
    <kml xmlns="http://earth.google.com/kml/2.0">
    <Document>
    <name>Salesforce Opportunities</name>
    <apex:repeat value="{!oppList}" var="o">
    <Placemark>
    <name>{!o.Name}</name>
    <address>{!o.Account.BillingStreet} {!o.Account.BillingCity}, {!o.Account.BillingState} {!o.Account.BillingPostalCode}</address>
    <description>
    <![CDATA[
    <p><b>Account: </b>{!o.Account.Name}
    <p><b>Amount: </b>${!o.Amount}
    <p><b>Close Date: </b>{!MONTH(o.CloseDate)}/{!DAY(o.CloseDate)}/{!YEAR(o.CloseDate)}
    ]]>
    </description>
    </Placemark>
    </apex:repeat>
    </Document>
    </kml>
    </apex:page>
    Note the following:
• The contentType="application/vnd.google-earth.kml+xml" attribute notifies the browser that the page content should be passed to Google Earth.

• The cache="true" attribute addresses this IE security issue.

• The meat of the page is in an <apex:repeat> block that iterates over a list of Opportunities. In this example, we're mapping the opportunity address, but you could use the Geocoding API to specify a Point with explicit longitude and latitude coordinates (see the sketch after this list).
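Here is a rough sketch of what a coordinate-based Placemark would look like; the longitude/latitude values are placeholders you would obtain from a geocoding service:

<Placemark>
  <name>{!o.Name}</name>
  <!-- coordinates are longitude,latitude,altitude; these values are placeholders -->
  <Point>
    <coordinates>-122.0822,37.4222,0</coordinates>
  </Point>
</Placemark>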

The page controller retrieves a List of Opportunity objects based on a comma-delimited URL parameter:

public class KMLController {

    public Opportunity[] oppList {get; set;}

    public KMLController() {
        String sel = '';
        if (null != ApexPages.currentPage().getParameters().get('sel')) {
            sel = ApexPages.currentPage().getParameters().get('sel');
        }
        String[] idList = sel.split(',', 0);
        oppList = [SELECT Id, Name, Amount, CloseDate,
                   Account.Name, Account.BillingStreet, Account.BillingCity,
                   Account.BillingState, Account.BillingPostalCode
                   FROM Opportunity
                   WHERE Id IN :idList];
    }
}

Finally, an Opportunity custom button is used to invoke the Visualforce page, passing a list of selected Opportunity Ids from a List View or Related List:

var sel = {!GETRECORDIDS($ObjectType.Opportunity)};

if (!sel.length) {
    alert("Please select at least one opportunity for mapping.");
} else {
    var d = new Date(); // Append milliseconds to the URL to avoid browser caching
    var url = "/apex/KMLPush?ms=" + d.getTime() + "&sel=" + sel;
    window.location.href = url;
}

When the button is clicked, the selected Opportunities will be displayed (via KML) in Google Earth.


If the KML file doesn't open properly, you might need to manually add the following Windows registry entries:

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\MIME\Database\Content Type]

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\MIME\Database\Content Type\application/vnd.google-earth.kml+xml]
"Extension"=".kml"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\MIME\Database\Content Type\application/vnd.google-earth.kmz]
"Extension"=".kmz"