Can NGINX mirror the HTTP request on multiple destinations?

The short answer seems to be NO.

Why would one want to mirror the requests incoming to an NGINX proxy to even one additional destination, let alone more? Well, maybe because you want to receive the production data on a test server temporarily, as it comes in – for some sort of analysis, or for creating some test data set.

Say the normal setup is that you have an NGINX proxy sitting in front of a Java web server, and what you want to achieve is that in addition to sending the request to the main Java web server, the NGINX proxy should send two more copies to two extra Java web servers.

Forwarding to one additional destination is easy. On the main NGINX proxy, configure the following:

location ~ ^/my.url {
  proxy_pass http://$my_java_web_server;
  proxy_set_header Host $my_java_web_server;

  post_action @post_for_first_mirroring;
}

location @post_for_first_mirroring {
  proxy_ignore_client_abort on;
  proxy_pass http://my.test1.server:8080;
}

Whatever HTTP request you get on /my.url will be forwarded to your additional destination http://my.test1.server:8080/my.url too.

Unfortunately, the same approach doesn’t work if you want to send copies of the request to two different destinations.

If you try to configure more than one “post_action” directive in your location block, NGINX rejects it during its config-validation cycle, saying the “post_action” directive is duplicate:

post_action @post_for_first_mirroring;
post_action @post_for_second_mirroring;

If you try the following variant, things get even weirder:

post_action @post_for_first_mirroring;

if ($some_flag_set_to_Yes = 'Yes') {
    post_action @post_for_second_mirroring;
}

Now it mirrors the request to the second destination, but not the first one. I found it quite interesting that instead of complaining clearly, NGINX silently executes only the second post_action.
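Putting it together, the variant that silently drops the first mirror looks roughly like this (a sketch; my.test2.server is a hypothetical second destination, the other names are from the earlier snippet):

```nginx
location ~ ^/my.url {
  proxy_pass http://$my_java_web_server;
  proxy_set_header Host $my_java_web_server;

  post_action @post_for_first_mirroring;

  # this one silently wins over the post_action above
  if ($some_flag_set_to_Yes = 'Yes') {
    post_action @post_for_second_mirroring;
  }
}

location @post_for_second_mirroring {
  proxy_ignore_client_abort on;
  proxy_pass http://my.test2.server:8080;
}
```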

Just wanted to capture this bit of information here, in case anyone has similar needs.


Connecting From GCE / GAE (Java) to Google Cloud SQL

Started using Google Cloud SQL a couple of days back. While it was easy to connect to the database from the App Engine application, it was a pain for a few hours last evening to figure out how to connect the app on the Compute Engine instances. Now that I am clearer on how to connect to Cloud SQL databases from both these environments, I wanted to leave a small write-up here.

1) Connecting from App Engine (GAE) to Cloud SQL:

A) Authorization between App Engine and DB instances

  • If your GAE app and the DB instance are under the same project, there is no need to worry about setting up the authorization between the two. It happens under the hood, using the default service account of the App Engine app.
  • If your GAE app is under a different project than the DB instance’s, then you need to find out the default service account of the App Engine project and add it as a member with the Editor role in the project that owns the DB instance. Use the “IAM & Admin” section of the Google Cloud Platform console for both these actions.

B) Establishing JDBC Connection

  • Before you can make JDBC connections, you need to create a user at the database-level and grant the appropriate permissions. Say, we create a user with details test_user/test_password.
  • Find out the Cloud SQL DB instance’s connection name. You will find it under the “Instance connection name” entry on the DB instance’s details page on the Google Cloud Dashboard. It takes the form “<project id>:<region>:<instance id>”.
  • The JDBC driver to use is –

Nothing needs to be installed for it; on App Engine, it’s available by default.

  • The connection string to use is –
"jdbc:google:mysql://<project id>:<region>:<instance id>/<db_name>?user=test_user&password=test_password"

The GoogleDriver takes care of understanding this URL and establishing the connection.
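As a sketch, the pieces fit together like this (the GoogleDriver class name is as commonly documented for App Engine at the time; the project/region/instance names and credentials below are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class CloudSqlFromGae {

    // Builds the App Engine style Cloud SQL JDBC URL from its parts.
    static String jdbcUrl(String project, String region, String instance, String db) {
        return "jdbc:google:mysql://" + project + ":" + region + ":" + instance + "/" + db;
    }

    static Connection connect() throws Exception {
        // The GoogleDriver ships with the App Engine runtime; nothing to install.
        Class.forName("com.mysql.jdbc.GoogleDriver");
        return DriverManager.getConnection(
                jdbcUrl("my-project", "my-region", "my-instance", "my_db")
                        + "?user=test_user&password=test_password");
    }
}
```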

2 a) Connecting from GCE (Compute Engine) to Cloud SQL – Using the Google Cloud SQL Proxy

A) Authorization between GCE and DB instances

  • If your GCE instance has a static IP address, the communication between the GCE instance and the DB instance needs to be pre-authorized by IP address. It can be done by going to the Cloud SQL instance’s details page on the Cloud Dashboard, opening the “Access Control -> Authorization” section, and adding an entry there to whitelist the GCE instance’s IP address.
  • If your GCE instance doesn’t have a static IP address (perhaps it’s part of an auto-scalable instance pool that sits behind a load balancer), then you cannot do the pre-authorization by adding the IP address. Your only option is to use the Google Cloud SQL Proxy.

Using the proxy takes care of the following:

  –  authorization using the GCE instance’s default service account

  –  opens a tunnel from the localhost’s 3306 port to the DB instance identified by “<project id>:<region>:<instance id>”. Remember that the DB could be in the same project, or in another project that your GCE service account has Editor access to.

B) Establishing JDBC Connection

  • The stock MySQL JDBC driver needs to be used now –
  • The connection string to use is –

Behind the scenes, the Cloud SQL Proxy connects the localhost’s 3306 port safely to the Cloud SQL instance.
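As an illustration (a sketch; the stock driver class name is the usual MySQL Connector/J one, and the database/user names are placeholders), the connection now targets localhost rather than the instance itself:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class CloudSqlViaProxy {

    // With the proxy tunnelling localhost:3306, the JDBC URL is a plain MySQL one.
    static String jdbcUrl(int localPort, String db) {
        return "jdbc:mysql://127.0.0.1:" + localPort + "/" + db;
    }

    static Connection connect() throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // the stock MySQL JDBC driver
        return DriverManager.getConnection(
                jdbcUrl(3306, "my_db") + "?user=test_user&password=test_password");
    }
}
```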

C) Setting up the Cloud SQL Proxy

The relevant steps for downloading/starting the proxy are reproduced below:


mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy

chmod +x cloud_sql_proxy

./cloud_sql_proxy -instances=<project id>:<region>:<instance id>=tcp:3306

If your GCE instance is part of an instance pool that grows/shrinks based on the scale, then you need to automate the above steps in the instance’s start-up script.
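For instance, a start-up script along these lines could do it (a sketch; the download URL is the one Google’s Cloud SQL docs pointed to at the time and may have changed, so verify it before use):

```sh
#!/bin/sh
# fetch the proxy binary (URL as per the Cloud SQL Proxy docs; verify before use)
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy

# start the proxy in the background on every boot
./cloud_sql_proxy -instances=<project id>:<region>:<instance id>=tcp:3306 &
```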

Also, if you don’t want to use 3306 locally. You can choose another free port and use the same in starting the proxy and in the JDBC Connection URL.

EDIT (23-Sep-2016): It seems that there is one more way to make GCE-to-Cloud-SQL connections, using “cloud-sql-mysql-socket-factory”. Added the next section 2 b) to cover its details.

2 b) Connecting from GCE (Compute Engine) to Cloud SQL – Using “cloud-sql-mysql-socket-factory”

A) Authorization between GCE and DB instances

Remains the same as when you use the Google Cloud SQL Proxy. It’s handled by this custom socket factory library behind the scenes.

B) Establishing JDBC Connection

  • The stock MySQL JDBC driver needs to be used –
  • The connection string to use is –
"jdbc:mysql://google/<db_name>?cloudSqlInstance=<project id>:<region>:<instance id>&"

IMO, here are some potential pros and cons of using this library vs the Google Cloud SQL Proxy:

Pros: No complication of downloading / installing / starting the Google Cloud SQL Proxy in the instance start-up scripts.

Cons: It’s a relatively new library, born just recently in May 2016. Not sure how battle-ready we should assume it to be, and whether it’ll be consistently performant at scale.

That’s all. Hope it helps someone. Cheers.

Android : Simulating SMS receiving from any number

There may be times when you want to test receiving SMS on a real device without actually sending the SMSes – perhaps because you don’t own those particular sender numbers but want to check your app’s response on receiving the SMS, or because you don’t want to incur the cost (the sender numbers could be international, for instance).

The code below is not a fresh solution but a re-application in 2014 of the trick described in this Japanese article written in 2010. With some minor changes, it still worked on my Samsung Galaxy S-II (Android 4.0.3).

So, here is the sample code:

Permission to be added in AndroidManifest.xml, so your app can prepare and broadcast an SMS:

<uses-permission android:name="android.permission.BROADCAST_SMS" />

The relevant code:

public class SendSmsActivity extends Activity {

    private static String TAG = "SendSmsActivity";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        handleSmsSending();
    }

    private void handleSmsSending() {
        try {
            sendSms(this, "09000000000", "hello there!");
            Log.d(TAG, "Sent the test sms");
        } catch (Exception e) {
            Log.d(TAG, "Exception : " + e.getClass() + " : " + e.getMessage());
        }
    }

    private void sendSms(Context context, String sender, String body) throws Exception {
        byte[] pdu = null;
        byte[] scBytes = PhoneNumberUtils.networkPortionToCalledPartyBCD("0000000000");
        byte[] senderBytes = PhoneNumberUtils.networkPortionToCalledPartyBCD(sender);
        int lsmcs = scBytes.length;

        // service centre timestamp, one semi-octet per field
        byte[] dateBytes = new byte[7];
        Calendar calendar = new GregorianCalendar();
        dateBytes[0] = reverseByte((byte) (calendar.get(Calendar.YEAR)));
        dateBytes[1] = reverseByte((byte) (calendar.get(Calendar.MONTH) + 1));
        dateBytes[2] = reverseByte((byte) (calendar.get(Calendar.DAY_OF_MONTH)));
        dateBytes[3] = reverseByte((byte) (calendar.get(Calendar.HOUR_OF_DAY)));
        dateBytes[4] = reverseByte((byte) (calendar.get(Calendar.MINUTE)));
        dateBytes[5] = reverseByte((byte) (calendar.get(Calendar.SECOND)));
        dateBytes[6] = reverseByte((byte) ((calendar.get(Calendar.ZONE_OFFSET) +
                calendar.get(Calendar.DST_OFFSET)) / (60 * 1000 * 15)));

        // assemble the SMS-DELIVER PDU
        ByteArrayOutputStream bo = new ByteArrayOutputStream();
        bo.write((byte) sender.length());

        // pack the body as GSM 7-bit via reflection on a hidden telephony class
        String sReflectedClassName = "";
        Class cReflectedNFCExtras = Class.forName(sReflectedClassName);
        Method stringToGsm7BitPacked = cReflectedNFCExtras.getMethod("stringToGsm7BitPacked", new Class[] { String.class });
        byte[] bodybytes = (byte[]) stringToGsm7BitPacked.invoke(null, body);
        pdu = bo.toByteArray();

        // broadcast the SMS_RECEIVED to registered receivers
        broadcastSmsReceived(context, pdu);

        // or, directly send the message into the inbox and let the usual SMS handling happen - SMS appearing in Inbox, a notification with sound, etc.
        startSmsReceiverService(context, pdu);
    }

    private void broadcastSmsReceived(Context context, byte[] pdu) {
        Intent intent = new Intent();
        intent.setAction("android.provider.Telephony.SMS_RECEIVED");
        intent.putExtra("pdus", new Object[] { pdu });
        context.sendBroadcast(intent);
    }

    private void startSmsReceiverService(Context context, byte[] pdu) {
        Intent intent = new Intent();
        intent.setClassName("", "");
        intent.putExtra("pdus", new Object[] { pdu });
        intent.putExtra("format", "3gpp");
        context.startService(intent);
    }

    private byte reverseByte(byte b) {
        return (byte) ((b & 0xF0) >> 4 | (b & 0x0F) << 4);
    }
}
Hope it helps someone with similar needs.

Google App Engine : Consistently identifying the logged-in user – over Http or Https!

So, you have a secured application on Google App Engine and use its UserService to find out who the currently logged-in user is.

It all generally seems to work, except that sometimes user reports hint that UserService.getCurrentUser() returns null, and you are caught wondering why.

A slightly deeper look into one such report couldn’t be avoided today. It turns out that UserService’s identification of the user depends on the presence of the cookie “ACSID” or “SACSID” – depending on whether it was an “http” or “https” URL that triggered the user authentication – and these cookies are not interchangeable.

If user authentication gets triggered over HTTPS (and an “SACSID” cookie is issued by Google App Engine), and the user then switches to an “http” application URL in the same session, this cookie is not sent to the server, as it is created with the “Secure” attribute, which ensures that the cookie is transmitted only over HTTPS connections and not HTTP.

Vice versa, if the user authentication started on HTTP (and an “ACSID” cookie was issued), then upon switching to HTTPS, although the cookie is sent to the server, it’s probably ignored, because the server then looks for an SACSID cookie (which looks encrypted / longer). In any case, sending an “ACSID” cookie over HTTPS doesn’t seem sufficient for UserService to identify the user.

Yes, as an aside, I also didn’t know that just a difference of HTTP vs HTTPS makes a request cross-origin!🙂

Coming back, such an HTTP / HTTPS switch within a session is the cause of these sudden occurrences of UserService.getCurrentUser() returning null, and of user authorization breaking at such times.
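One workaround I can think of (my own sketch, not an official App Engine feature) is to keep the whole app on a single scheme, redirecting any plain-HTTP request to its HTTPS variant so that the SACSID cookie stays usable. The helper below computes the redirect target; a servlet filter could call it and issue a redirect whenever the returned value differs from the request URL:

```java
public class SchemeGuard {

    // Returns the https:// variant of an http:// URL; other URLs are unchanged.
    static String httpsVariant(String url) {
        if (url.startsWith("http://")) {
            return "https://" + url.substring("http://".length());
        }
        return url;
    }
}
```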

I have been looking, but haven’t found any App Engine specific configuration that helps here. In the meantime, I just wanted to capture this bit about UserService’s behavior when a switch between HTTP and HTTPS happens!

Android : Removing Raw Contacts When Using Read-Only Sync Adapters

So, for your favorite Android application, you have written a nice Read-Only Sync Adapter that allows you to create application-specific contacts in the user’s address book.

Some time passes and now you need the ability to clean them up – perhaps to allow repeated rounds of testing around this “custom contacts” stuff. If you try to remove such an app-specific contact, you are prevented by Android with the following message:


When you try to do it programmatically, you still run into the same situation, the difference being that this time you don’t get explicitly reminded of this restriction. It’s when you start to see the unwanted side effects that you realize the “raw contacts” are not really being deleted – they are being hidden!

So, what do you do? You need to pass some extra information to Android – “CALLER_IS_SYNCADAPTER = true”, so that it allows your application to “really” delete the raw contact.

So, instead of the usual RawContacts URI

ContactsContract.RawContacts.CONTENT_URI
you need to use

ContactsContract.RawContacts.CONTENT_URI.buildUpon().appendQueryParameter(ContactsContract.CALLER_IS_SYNCADAPTER, "true").build()

Voila, with this little modification in the contact URI, the contact is now really deleted and not hidden!

Got caught in this trap today. Hope it helps someone avoid it!

GitHub Invoking Jenkins Secured With Digest Auth

So, you have a Jenkins setup that is secured with “Digest” based authentication and you’d like the “Post-Receive” service hooks of GitHub to trigger the builds on that Jenkins server?

Well, you’re out of luck. As of mid-September 2013, GitHub service hooks don’t support “digest” based authentication! You can, however, switch the Jenkins setup to “Basic” authentication, configure a “Jenkins (GitHub Plugin)” service hook, and bundle the authentication information in the following form:


Overall, the steps would be:

  1. Install Github plugin on your Jenkins server.
  2. In the Jenkins job that should be triggered, enable the following Build Trigger : “Build when a change is pushed to GitHub”.
  3. In the user database that Jenkins is set up to authenticate against, create the user whose authentication information is used in the service hook URL, and make sure that it has only the “read” access needed to trigger the builds.
  4. On the GitHub side, in the repository settings, go to Service Hooks -> Jenkins (GitHub plugin), and configure the correct Jenkins URL of the form shown above. GitHub posts a JSON to this URL that identifies the repository that was changed and GitHub Jenkins plugin finds all the jobs that are tied to that repository and tries to invoke their builds.

Another little thing to note is that the GitHub repo URLs configured on the Jenkins jobs need to be of the standard forms, like:

If you use a .ssh config file to set up an SSH alias, as follows, let’s say:

# contents of $HOME/.ssh/config
Host github
    HostName github.com
    User git

and your GitHub repo URL looks like “github:xxx/project.git”, the GitHub plugin cannot expand this URL using the .ssh config and match it to trigger the build. So, be sure to use one of the standard “git” URL patterns!
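Concretely, with the repo xxx/project from above, the difference looks like this:

```
# standard forms the GitHub plugin can match:
git@github.com:xxx/project.git
https://github.com/xxx/project.git

# ssh-config alias form it cannot expand:
github:xxx/project.git
```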

Using C2DM? Time to migrate to Google Cloud Messaging!

OK, here is the indisputable evidence for why Google had to urgently announce “Important: C2DM has been officially deprecated as of June 26, 2012!”

Deadline Google had to meet

Somebody at Google probably remembered just in time that Michael J. Fox‘s car was being “push”ed back to the future using Android and it had better be at its best when it was time! It wouldn’t have looked good if Michael J. Fox had to return to 1985 due to a “Device Quota Exceeded” error, would it?

So, that’s the (undeniable) background – all planned by “Doc” decades back! Now, the rest of the mortal souls have to rise to the occasion, let go of the Google’s “beta” C2DM solution and migrate to the new and shiny one – Google Cloud Messaging (GCM).

But, why should we migrate?

  • For new apps that need events pushed to Android devices, there is no choice now! The sign-up for C2DM is gone. New applications have to use GCM.
  • Although the existing C2DM based applications will continue to work, quota requests are not going to be accepted anymore. That’s not very future-proof, right? We don’t want to be limited to the users we currently have – we want our apps to explode! So, it makes sense for existing apps to migrate too.
  • There are many advantages too:
    • No quotas – no annoying DeviceQuotaExceeded / QuotaExceeded errors anymore. Hopefully it will remain so.
    • Availability of “push” stats – for applications published through Google Play, the GCM stats can now be monitored – how many messages were sent each day, how many new devices registered, etc. More details on ‘how to set up your app for stats’ are available here.
    • Richer API / Better Efficiency – Unlike the plain-text body that could be POSTed to the C2DM endpoint, GCM allows both plain-text / JSON formats in the POST body. The JSON format opens up new implementation opportunities like multicasting a particular message to several devices at a time, or allowing multiple senders to push to one particular device at the same time.
    • Helper libraries for client and server development. No need to deal with requests/responses to the GCM endpoint at the low-level plain-text or JSON level. There are additional goodies built in, like retry logic with exponential back-off.
    • Payload size limit is 4096 bytes in GCM, compared to 1024 bytes in C2DM.
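For example, a multicast push in the JSON format looks roughly like this (a sketch; the registration IDs and payload keys are placeholders) – it gets POSTed to the GCM send endpoint with the server API key in the Authorization header:

```json
{
  "registration_ids": ["reg_id_device_1", "reg_id_device_2"],
  "data": {
    "message": "hello from the server"
  }
}
```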

The code-level changes to be done on the client and server sides are well documented in the GCM documentation. So, I will just fast-forward to some specific things that I had to address:

   a) Migration approach: C2DM and GCM are not interoperable. So, an application can’t push to a C2DM-registered device through a GCM endpoint, and vice-versa. On the server-side, we need to know whether we are dealing with C2DM-registered devices or GCM-registered devices and push events through the respective endpoints. Hopefully, soon our userbase will switch over to GCM-enabled version of the Android application client, and when it reaches the point when there are no registrations marked as C2DM, we can drop the support for it and complete the migration.

   b) Eclipse specific issues:

  • Installation of the helper libraries under the Android SDK : The option to use for installing these libraries, “Extras > Google Cloud Messaging for Android Library”, does not show up under the Android SDK Manager until you have the Android SDK Tools updated to revision 20 and the Android SDK Platform-tools updated to revision 12. This also necessitates the update of the ADT plugin to v20.
  • NoClassDefFoundError errors post ADT plugin update : After updating the ADT plugin from v15 to v20, we started getting NoClassDefFoundError for classes coming from referenced external libraries. These jars existed under the “/lib” folder. It seems the new ADT plugin looks for them under the “/libs” folder, so the jars in the “/lib” folder are not picked up for dalvik conversion by the new ADT plugin. More details are available here.

Other than these few issues, the migration has been quite a smooth exercise – the helper libraries made it much easier.

Hope this information helps someone in C2DM-to-GCM migration. For more assistance, the GCM forum is here.