bits and pieces

February 27, 2014

Google App Engine : Consistently identifying the logged-in user – over Http or Https!

Filed under: Uncategorized — Tags: — roshandawrani @ 3:44 pm

So, you have a secured application on Google App Engine and use its UserService to find out who the currently logged-in user is.

It all generally seems to work, except that sometimes user reports hint that UserService.getCurrentUser() returns null, and you are left wondering "why!".

A slightly deeper look into one such report couldn't be avoided today. It turns out that UserService's identification of the user depends on the presence of the cookie "ACSID" or "SACSID" – depending on whether it was an HTTP or an HTTPS URL that triggered the user authentication – and these cookies are not interchangeable.

If authentication gets triggered over HTTPS (and an "SACSID" cookie is issued by Google App Engine), and the user then switches to an HTTP application URL in the same session, this cookie is not sent to the server at all, because it is created with the "Secure" attribute, which ensures the cookie is transmitted only over HTTPS connections and never over HTTP. Vice versa, if authentication started over HTTP (and an "ACSID" cookie was issued), then upon switching to HTTPS the cookie is sent to the server, but it is probably ignored, because the server then looks for an "SACSID" cookie (which looks longer / encrypted). In any case, sending an "ACSID" cookie over HTTPS does not seem sufficient for UserService to identify the user.

Yes, as an aside, I also didn't know that just the difference of HTTP vs HTTPS makes a request cross-origin! :-)

Coming back: such an HTTP / HTTPS switch within a session is the cause of these sudden occurrences of UserService.getCurrentUser() returning null and of user authorization breaking at such times.

I have been looking, but haven't found any App Engine specific configuration that helps here. In the meantime, I just wanted to capture this bit about UserService's behavior when a switch between HTTP and HTTPS happens!
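One way to sidestep the mixed-scheme session altogether is to keep the whole application on a single scheme, e.g., by redirecting every HTTP request to its HTTPS equivalent before authentication kicks in. The URL rewrite at the heart of such a redirect is sketched below as a plain helper – the class and method names are purely illustrative; in a real App Engine app this logic would sit in a servlet filter:

```java
// Sketch: compute the HTTPS redirect target for an incoming URL, so that
// authentication always happens over one scheme and the resulting session
// cookie (SACSID) stays valid throughout. All names here are illustrative.
public class SchemeEnforcer {

    // Returns null if the URL is already on HTTPS, else its HTTPS equivalent.
    public static String httpsRedirectTarget(String requestUrl) {
        if (requestUrl.startsWith("https://")) {
            return null; // already on the secure scheme, nothing to do
        }
        if (requestUrl.startsWith("http://")) {
            return "https://" + requestUrl.substring("http://".length());
        }
        throw new IllegalArgumentException("Unexpected URL: " + requestUrl);
    }

    public static void main(String[] args) {
        System.out.println(httpsRedirectTarget("http://myapp.appspot.com/secure/page"));
    }
}
```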

February 19, 2014

Android : Removing Raw Contacts When Using Read-Only Sync Adapters

Filed under: Uncategorized — Tags: — roshandawrani @ 3:40 pm

So, for your favorite Android application, you have written a nice Read-Only Sync Adapter that allows you to create application-specific contacts in the user's address book.

Some time passes and now you need the ability to clean them up – perhaps to allow repeated rounds of testing around this "custom contacts" stuff. If you try to remove such an app-specific contact, Android prevents you with the following message:


When you try to do it programmatically, you still run into the same situation, the difference being that this time you don't get explicitly reminded of the restriction. It's only when you start to see the unwanted side-effects that you realize the "raw contacts" are not really being deleted – they are being hidden!

So, what do you do? You need to pass some extra information to Android – “CALLER_IS_SYNCADAPTER = true”, so that it allows your application to “really” delete the raw contact.

So, instead of the usual RawContacts URI

ContactsContract.RawContacts.CONTENT_URI

you need to use

ContactsContract.RawContacts.CONTENT_URI.buildUpon().appendQueryParameter(ContactsContract.CALLER_IS_SYNCADAPTER, "true").build()

Voila, with this little modification in the contact URI, the contact is now really deleted and not hidden!
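In context, the deletion then goes through ContentResolver.delete() with that sync-adapter URI. Since the Android classes are only available on-device, the sketch below mirrors the URI transformation in plain Java just to make the change concrete – the constant values are the standard Android ones, while the helper name is an illustrative assumption:

```java
// Sketch of what buildUpon().appendQueryParameter(...) produces for the
// RawContacts content URI: the same URI with caller_is_syncadapter=true added.
public class RawContactsUri {
    // Value of ContactsContract.RawContacts.CONTENT_URI on Android.
    static final String RAW_CONTACTS = "content://com.android.contacts/raw_contacts";
    // Value of the ContactsContract.CALLER_IS_SYNCADAPTER constant.
    static final String CALLER_IS_SYNCADAPTER = "caller_is_syncadapter";

    // Appends the query parameter that tells Android the caller is a sync adapter.
    static String asSyncAdapterUri(String contentUri) {
        String separator = contentUri.contains("?") ? "&" : "?";
        return contentUri + separator + CALLER_IS_SYNCADAPTER + "=true";
    }

    public static void main(String[] args) {
        // The URI to pass to ContentResolver.delete() for a real deletion:
        System.out.println(asSyncAdapterUri(RAW_CONTACTS));
    }
}
```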

Got caught in this trap today. Hope it helps someone avoid it!

September 24, 2013

Github invoking Jenkins Secured With Digest Auth

Filed under: Uncategorized — Tags: , — roshandawrani @ 10:38 am

So, you have a Jenkins setup that is secured with “Digest” based authentication and you’d like the “Post-Receive” service hooks of GitHub to trigger the builds on that Jenkins server?

Well, you’re out of luck. As of mid-September (2013), GitHub service hooks don’t support “digest” based authentication! You can switch the “Jenkins” setup to “Basic” authentication and configure a “Jenkins (GitHub Plugin)” service hook and bundle the authentication information in the following form:


Overall, the steps would be:

  1. Install the GitHub plugin on your Jenkins server.
  2. In the Jenkins job that should be triggered, enable the Build Trigger "Build when a change is pushed to GitHub".
  3. In the user database that Jenkins is set up to authenticate against, set up the user whose authentication information is used in the service hook URL, and make sure it has only the "read" access needed to trigger builds.
  4. On the GitHub side, in the repository settings, go to Service Hooks -> Jenkins (GitHub plugin) and configure the correct Jenkins URL of the form shown above. GitHub posts a JSON payload to this URL that identifies the repository that changed; the GitHub Jenkins plugin then finds all the jobs tied to that repository and tries to trigger their builds.
Another little thing to note is that the GitHub repo URLs configured in the Jenkins jobs need to be of the standard forms, like:
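For example, the standard forms look like this (user and repository names below are placeholders):

```
git@github.com:youruser/yourproject.git
https://github.com/youruser/yourproject.git
git://github.com/youruser/yourproject.git
```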

If you use a .ssh config file to set up an SSH alias, like the following:

# contents of $HOME/.ssh/config
Host github
    User git

and your GitHub repo URL looks like "github:xxx/project.git", the GitHub plugin cannot expand this URL using the .ssh config and match it to trigger the build. So be sure to use one of the standard "git" URL patterns!

July 12, 2012

Using C2DM? Time to migrate to Google Cloud Messaging!

Filed under: Uncategorized — Tags: — roshandawrani @ 11:19 am

OK, here is the indisputable evidence for why Google had to urgently announce “Important: C2DM has been officially deprecated as of June 26, 2012!”

Deadline Google had to meet


Somebody at Google probably remembered just in time that Michael J. Fox‘s car was being “push”ed back to the future using Android and it had better be at its best when it was time! It wouldn’t have looked good if Michael J. Fox had to return to 1985 due to a “Device Quota Exceeded” error, would it?

So, that’s the (undeniable) background – all planned by “Doc” decades back! Now, the rest of the mortal souls have to rise to the occasion, let go of the Google’s “beta” C2DM solution and migrate to the new and shiny one – Google Cloud Messaging (GCM).

But, why should we migrate?

  • For new apps that need events pushed to Android devices, there is no choice now! Sign-up for C2DM is gone. New applications have to use GCM.
  • Although existing C2DM based applications will continue to work, quota requests are no longer accepted. That's not very future-proof, right? We don't want to be limited to the users we currently have – we want our apps to explode! So it makes sense for existing apps to migrate too.
  • There are many advantages too:
    • No quotas – no annoying DeviceQuotaExceeded / QuotaExceeded errors anymore. Hopefully it will remain so.
    • Availability of "push" stats – for applications published through Google Play, the GCM stats can now be monitored – how many messages were sent each day, how many new devices registered, etc. More details on 'how to set up your app for stats' are available here.
    • Richer API / Better Efficiency – Unlike the plain-text body that could be POSTed to the C2DM endpoint, GCM allows both plain-text / JSON formats in the POST body. The JSON format opens up new implementation opportunities like multicasting a particular message to several devices at a time, or allowing multiple senders to push to one particular device at the same time.
    • Helper libraries for client and server development. No need to deal with request/response with GCM end-point at low-level plain-text or JSON level. There are additional goodies built-in like ‘retry logic with exponential back-off.’
    • Payload size limit is 4096 bytes in GCM, compared to 1024 bytes in C2DM.
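For instance, the JSON multicast mentioned above boils down to POSTing a body like the following to the GCM send endpoint (the registration ids and the payload key below are placeholder values):

```json
{
  "registration_ids": ["device-reg-id-1", "device-reg-id-2"],
  "data": {
    "message": "Something happened on the server!"
  }
}
```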

The code level changes to be done on client and server sides are well documented in the GCM documentation. So, I will just fast-forward to some specific things that I had to address:

   a) Migration approach: C2DM and GCM are not interoperable. So, an application can't push to a C2DM-registered device through a GCM endpoint, and vice versa. On the server side, we need to know whether we are dealing with C2DM-registered or GCM-registered devices and push events through the respective endpoints. Hopefully our userbase will soon switch over to the GCM-enabled version of the Android application client, and once there are no registrations left marked as C2DM, we can drop the C2DM support and complete the migration.
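The server-side split during the migration can be sketched like this; the endpoint constants are the documented C2DM / GCM send URLs, while the Registration holder and the grouping helper are illustrative assumptions about how registrations might be tracked:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: during migration, group device registrations by the endpoint they
// must be pushed through, since C2DM and GCM are not interoperable.
public class PushDispatcher {
    static final String C2DM_ENDPOINT = "https://android.apis.google.com/c2dm/send";
    static final String GCM_ENDPOINT  = "https://android.googleapis.com/gcm/send";

    static class Registration {
        final String id;      // the device's registration id
        final boolean viaGcm; // how this device registered (tracked server-side)
        Registration(String id, boolean viaGcm) { this.id = id; this.viaGcm = viaGcm; }
    }

    // Groups registration ids by the endpoint they must be pushed through.
    static Map<String, List<String>> groupByEndpoint(List<Registration> regs) {
        Map<String, List<String>> byEndpoint = new HashMap<>();
        for (Registration r : regs) {
            String endpoint = r.viaGcm ? GCM_ENDPOINT : C2DM_ENDPOINT;
            byEndpoint.computeIfAbsent(endpoint, k -> new ArrayList<>()).add(r.id);
        }
        return byEndpoint; // each list can then be POSTed to its endpoint
    }
}
```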

   b) Eclipse specific issues:

  • Installation of the helper libraries under the Android SDK: the option to use for installing these libraries, "Extras > Google Cloud Messaging for Android Library", does not show up in the Android SDK Manager until you have the Android SDK Tools updated to revision 20 and the Android SDK Platform-tools updated to revision 12. This also necessitates updating the ADT plugin to v20.
  • NoClassDefFoundError errors after the ADT plugin update: after updating the ADT plugin from v15 to v20, we started getting NoClassDefFoundError for classes coming from referenced external libraries. These jars lived under the "/lib" folder, but the new ADT plugin looks for them under the "/libs" folder, so the jars in "/lib" were not picked up for dalvik conversion. More details are available here.

Other than these few issues, the migration has been quite a smooth exercise – the helper libraries made it much easier.

Hope this information helps someone in C2DM-to-GCM migration. For more assistance, the GCM forum is here.


February 20, 2012

Browser detection for iOS / Android devices in a Grails application

Filed under: Uncategorized — Tags: , — roshandawrani @ 10:42 pm

If you want to detect the details of the browser / platform your Grails application is being used from, there is a plugin called browser-detection. However, it seems it gets things wrong sometimes. Today was one such day for me, which forced me to take a closer peek at it.

It seems the User-Agent header values that you get from iOS and Android devices have information coming in different formats, as shown below:

  • [1] iOS – Mozilla/5.0 (iPod; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5
  • [2] Android – Mozilla/5.0 (Linux; U; Android 2.3.3; en-gb; GT-I9100 Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1

So, if you want to detect the platform, operating system, language, etc., be sure to parse the information keeping such differences in mind, something like:

final boolean parsingForAndroid = userAgent.indexOf("Android") > 0
// osInfo = detail string inside the first parentheses, i.e., for Android: "Linux; U; Android 2.3.3; en-gb; GT-I9100 Build/GINGERBREAD"
def (operatingSystem, sec, platform, language) = [null, null, null, null]
if(parsingForAndroid) {
    (operatingSystem, sec, platform, language) = osInfo.split("; ") as List
} else {
    (platform, sec, operatingSystem, language) = osInfo.split("; ") as List
}
It’s not meant to be an extensive improvement over browser-detection plugin. The idea is just to leave a note that such differences exist when you want to detect platform details across different mobile platforms like iOS / Android.


[1] –
[2] –

February 14, 2012

Adding a contact on Android device emulator

Filed under: Uncategorized — Tags: — roshandawrani @ 4:06 pm

Here is something that surprised me a few times when I tried to add contacts on the Android emulator for some testing.

I would open the Contacts application, it would say that ([1]) there were no existing contacts, and ask me to add new ones using the menu option “New Contact”. I would enter all the details, press “Done”, and nothing would happen! It would simply come back to the previous screen and again repeat [1].

It seems that it was creating the contact all right, but just not displaying it, because the relevant configuration was not done. There is another menu option next to "New Contact", called "Display Options". You need to go there and enable the display of the contacts you are interested in, under the category "Choose contacts to display". Phew!

December 27, 2011

Grails, Geb: Executing multiple functional test phases together

Filed under: Uncategorized — Tags: , , — roshandawrani @ 12:43 pm

A little trick before 2011 goes into the books:

Some time back, we had the need to execute some time-consuming functional tests that covered the Facebook integration of our Grails application. We decided to define a separate test phase rather than mix these tests with the usual app-level functional tests, so as not to lengthen our build cycles too much. By not mixing up the functional and facebook phase tests, we had more flexibility about when we wanted to run the "facebook" tests and when we didn't.

The article "Custom Grails Test Types / Phases" mostly describes the approach we took for defining the custom phase. In addition, we had to introduce some workarounds to share some test classes between the two phases, as, by default, Grails uses an isolated classloader for each test phase and doesn't provide a mechanism to easily share the "common" classes.

There was another issue though, and this little trick is about that issue.

It seems the embedded Tomcat instance that Grails uses did not get shut down cleanly at the end of the "functional" phase and restarted at the beginning of the "facebook" phase when both test phases were executed together. There was some mix-up related to a thread-pool inside the embedded Tomcat: the second startup of Tomcat saw the same pool, which was in the "CLOSING" state, and seeing it in that state, it would decide to abort the startup. Adding some delay also did not reliably help.

The way we finally avoided the problem is by not shutting Tomcat down after the "functional" phase and not trying to start it at the beginning of the "facebook" phase when both phases are run together. If only one of these phases is executed, Tomcat start-up and shutdown happen normally. Here are the relevant code snippets:

def originalFunctionalTestPhaseCleanUp
def testPhaseBeingRun

eventTestPhaseStart = { phase ->
    // record which phase is being run currently
    testPhaseBeingRun = phase
}

eventAllTestsStart = {
    /* replace Grails' original functionalTestPhaseCleanUp closure, so that we can
       decide for ourselves whether to do the clean-up or leave it up to the facebook phase */
    originalFunctionalTestPhaseCleanUp = owner.functionalTestPhaseCleanUp
    owner.functionalTestPhaseCleanUp = myFunctionalTestPhaseCleanUp
    addFacebookTestPhase()
}

addFacebookTestPhase = {
    // define our custom 'facebook' phase with Grails, as described in LD's article mentioned above.
}

// callback made by Grails for a phase as "${phase}TestPhasePreparation"()
facebookTestPhasePreparation = {
    // Skip starting the app if the 'functional' phase is also run, as the app is already started by this point.
    if(testPhaseBeingRun == 'facebook' && !filteredPhases.containsKey('functional')) {
        functionalTestPhasePreparation()
    }
}

// override the default functionalTestPhaseCleanUp behavior, so that we do the clean-up only once
myFunctionalTestPhaseCleanUp = {
    // Skip shutting down the app if the 'facebook' phase is also run. It will be shut down at the end of that phase.
    if(testPhaseBeingRun == 'functional' && !filteredPhases.containsKey('facebook')) {
        originalFunctionalTestPhaseCleanUp()
    }
}

facebookTestPhaseCleanUp = {
    // finally, the usual clean-up that is done after the functional phase.
    originalFunctionalTestPhaseCleanUp()
}
Hope it helps someone having similar needs / issues.

November 10, 2011

Using Geb based functional tests in a “custom” Grails test phase

Filed under: Uncategorized — Tags: , , — roshandawrani @ 12:37 pm

Forget it! It’s not supported (at least as of Geb 0.6.0). Just spent my morning investigating it.

I had the need to set up a separate set of functional tests (functional by nature, not in terms of Grails' functional test phase packaging) that needed some special external interfaces that were not always available. A "custom" test phase seemed perfect for my needs: it would include these tests, which would not run as part of '> grails test-app' or '> grails test-app functional:', but just when and where I needed them – maybe once a day, after making sure that their external interfaces were up: '> grails test-app custom:'.

Following Luke Daley's very helpful article "Custom Grails Test Types / Phases", I set up a custom test phase and started writing some Geb-Spock plugin based tests, only to find out that Geb is married to the test phase name "functional". So there doesn't seem to be an opportunity to use Geb in a custom Grails test phase. :-(

Update 1 (23-Nov-11): The following issue has been filed to configure Geb to work with other test phases:

September 30, 2011

Grails, Cassandra: Giving each test a clean DB to work with

Filed under: Uncategorized — Tags: , — roshandawrani @ 1:26 pm

If you are using Grails, you must be used to the ease with which your integration tests need not bother about what other tests are doing in the application database. Grails achieves it by starting a new transaction before each test and rolling it back when the test is over – whether it passes or fails. So the actual data changes are never written to the database, and you are happy to have this isolation between tests, as it allows you to focus on each test independently instead of worrying about side-effects.

But what if the database you used did not support transactions? How do you achieve the same isolation then?

For one such database – Cassandra – it can be achieved by cleaning up, after each integration test, the data from all the Column Families (tables) in the Keyspace (schema). (The ability to hook into Grails events made this quite easy.) Not every test makes that many changes in the database, so it can work out quite OK – obviously not as fast as the Hibernate + RDBMS combo, but acceptable.

The steps I used:

  • Describe the keyspace and find out all the column families in it
  • For each column family you want to clean-up data from:
    • Do range queries on the column family, skipping any phantom records found (Use Pagination, if needed)
    • Collect all valid row keys
    • Do a batched mutation for all the row keys to clean-up the column family data

So, all worked well, but it seemed like too much clean-up code. I looked around and found that Cassandra supports a "truncate" operation, which was sadly missing from the Hector API, so I promptly requested that the API gap be filled. The "truncate" operation was soon made available in the Hector API, and it cut our clean-up code down from 20 lines of range-scans-and-picking-up-keys-to-delete to a one-liner "truncate" call.

Over time, the schema grew (more column families, indexes), as did the application (more integration tests needing more clean-up cycles), and the integration tests that used to finish in seconds started taking many minutes. For some time I suspected that Cassandra upgrades had introduced some new configuration that needed tweaking, but then I measured. To my utmost surprise, each invocation of the one-liner "truncate" call had started spending around 4-5 seconds in Hector + Cassandra, and of an overall integration test cycle of 5.5 minutes, 5 minutes were spent in clean-up and 0.5 in actual tests :-)

We immediately switched back to the old way of doing range-scans-and-batch-mutations, and here is the difference in overall clean-up time (nearly 75 invocations in the whole integration test phase): 5 minutes vs 0.05 minutes! So we are happily back to our tests taking 0.55 minutes instead of 5.5!

Please, please do not use Hector + Cassandra’s “truncate” calls frequently to provide your tests a clean DB slate in your Grails / Java applications! Its performance is bad!

Also, sometimes 20 lines of code can be better than a one-liner. :-)

September 13, 2011

Better serialization of Groovy objects using XStream

Filed under: Uncategorized — Tags: — roshandawrani @ 1:24 pm

This blog post is to resolve a little disconnect between Groovy and XStream serialization library:

  • Groovy makes use of synthetic members quite a bit, and sometimes it also adds synthetic fields to classes it compiles.
  • XStream skips only static and transient data members (as of the latest version, 1.4.1), but not synthetic ones, with the small inconvenience that Groovy's compiler-provided synthetic members also show up in the serialized form.

Here is an example, where we try to serialize a Groovy object into JSON format:

@Grab(group='com.thoughtworks.xstream', module='xstream', version='1.4.1')
import com.thoughtworks.xstream.XStream

import com.thoughtworks.xstream.io.json.JsonHierarchicalStreamDriver
import groovy.transform.Immutable

XStream xstream = new XStream(new JsonHierarchicalStreamDriver())

def person = new Person(firstName: 'roshan', lastName: 'dawrani')
println xstream.toXML(person)

@Immutable
class Person {
    String firstName
    String lastName
}
The above code outputs the following, because the AST transformation done by the compiler for @Immutable adds some extra fields for its internal use, such as "$print$names" and "$hash$code", and it can be a little confusing or annoying to see these unknown data members mixed up with your regular ones.

{"Person": {
  "firstName": "roshan",
  "lastName": "dawrani",
  "$print$names": true,
  "$hash$code": 0
}}
In the following code, we try to remove the disconnect between Groovy and XStream by introducing a custom converter that enhances the filter applied by XStream on data members and excludes the synthetic ones as well:

@Grab(group='com.thoughtworks.xstream', module='xstream', version='1.4.1')
import com.thoughtworks.xstream.XStream

import java.lang.reflect.Field

import com.thoughtworks.xstream.converters.reflection.PureJavaReflectionProvider
import com.thoughtworks.xstream.converters.reflection.ReflectionConverter
import com.thoughtworks.xstream.io.json.JsonHierarchicalStreamDriver
import com.thoughtworks.xstream.mapper.Mapper

import groovy.transform.Immutable

XStream xstream = new XStream(new JsonHierarchicalStreamDriver())
xstream.registerConverter(new GroovyObjectConverter(xstream.mapper))

def person = new Person(firstName: 'roshan', lastName: 'dawrani')
println xstream.toXML(person)

@Immutable
class Person {
    String firstName
    String lastName
}

class GroovyObjectConverter extends ReflectionConverter {
    GroovyObjectConverter(Mapper mapper) {
        super(mapper, new GroovyObjectReflectionProvider())
    }

    boolean canConvert(Class type) {
        GroovyObject.isAssignableFrom(type)
    }
}

class GroovyObjectReflectionProvider extends PureJavaReflectionProvider {
    protected boolean fieldModifiersSupported(Field field) {
        super.fieldModifiersSupported(field) && !field.isSynthetic()
    }
}
With this technique the output is correctly shown as:

{"Person": {
  "firstName": "roshan",
  "lastName": "dawrani"
}}
Hope the technique is useful to some of you.
