Writing a GraphQL DSL in Kotlin

Reading Time: 3 minutes

I’ve recently spent some time testing a GraphQL endpoint against a few queries. At the moment I’m keeping my queries as multi-line strings but I was wondering:

How hard would it be to build a GraphQL query DSL in Kotlin?

I thought this could be a good opportunity to become more familiar with Kotlin DSL capabilities.

Here’s what I’ve got so far.

query("theQuery") {
    "allUsers"("type" to "users", "limit" to 10) {
        select("name")
        select("address") {
            select("city")
            select("street"("postCode" to true))
        }
    }
}

The above snippet produces the following result:

query theQuery {
    allUsers(type: "users", limit: 10) {
        name
        address {
            city
            street(postCode: true)
        }
    }
}

The main challenges I have faced up to this point have been around supporting:

  • any string to be used as the root field of the query (e.g. "allUsers")
  • nested selection of fields
  • a map-like syntax for field arguments (I’ve settled for the to method for now)

Any String is a Field

As you can see from the above example, it is possible to start the root field declaration with a string, followed by the field selection:

"allUsers" {
    select("name")
    select("title")
}

I’ve achieved that thanks to Kotlin’s support for overloading the invoke operator. Read on to find out how I have implemented it.

String.invoke to the rescue

Kotlin’s incredibly powerful support for extensions lets me define my own implementation of invoke on String.

operator fun String.invoke(block: Field.Builder.() -> Unit) =
    Field.Builder().apply {
        this@apply.block()
    }.withName(this)

This way, any String instance can be turned into a Field.Builder by passing a block to the invoke operator (). Additionally, Kotlin’s compact lambda syntax saves us from having to write the opening and closing parentheses explicitly, making the result a little more readable.
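To make this concrete, here’s a minimal, self-contained sketch of what a Field.Builder could look like. The real implementation lives in the graphql-forger repo; everything here beyond select and withName is an assumption kept deliberately small.

```kotlin
// Hypothetical minimal Field/Builder pair: just enough to support
// the `"anyString" { select(...) }` syntax shown above.
data class Field(val name: String, val children: List<Field> = emptyList()) {
    class Builder {
        private val fields = mutableListOf<Field>()

        // Registers a sub-field selection on the field being built.
        fun select(name: String) {
            fields += Field(name)
        }

        fun withName(name: String): Field = Field(name, fields.toList())
    }
}

// Any String can now open a field declaration.
operator fun String.invoke(block: Field.Builder.() -> Unit): Field =
    Field.Builder().apply(block).withName(this)

fun main() {
    val field = "allUsers" {
        select("name")
        select("title")
    }
    println(field.children.map { it.name }) // [name, title]
}
```

Note how the trailing-lambda convention means the call site contains no parentheses at all.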

Select-ing sub-fields

"allUsers" {
    select("name")
    select("title")
}

Inside the declared root field, a sequence of select instructions informs the current field builder about which sub-fields we are interested in. The way this is achieved is by letting the compiler know that we are in the context of a Field.Builder and that any method specified in the block has to be resolved against it. This is possible thanks to function literals with receiver.

Function literals with receiver

This is probably the most useful feature Kotlin has to offer when it comes to building DSLs.

operator fun String.invoke(block: Field.Builder.() -> Unit)

The block argument has been declared as Field.Builder.() -> Unit.
As we can see from the docs:

[…] Kotlin provides the ability to call an instance of a function type with receiver providing the receiver object.


Function literals with receiver – Kotlin reference

What this means is that I can invoke the block with the current Field.Builder instance as receiver, resulting in the select invocations being resolved against it.

Field arguments

When it comes to specifying field arguments, I’ve had to settle for that not-so-pretty to syntax.

"type" to "users", "limit" to 10

I still think it’s a good compromise considering that Kotlin doesn’t offer much more when it comes to map-building syntax.

"allUsers"("type" to "users", "limit" to 10) {
    select("name")
    select("address") {
        select("city")
        select("street"("postCode" to true))
    }
}

The to method that allows for that comes from the standard library.

public infix fun <A, B> A.to(that: B): Pair<A, B> = Pair(this, that)

Note that the infix keyword is what allows for the simplified receiver method argument notation.
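For example, the two forms below are equivalent and produce the same Pair:

```kotlin
fun main() {
    val viaInfix = "limit" to 10   // infix notation: receiver method argument
    val viaCall = "limit".to(10)   // plain method-call notation
    println(viaInfix == viaCall)   // true
}
```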

Finally, a slightly more complicated definition of String.invoke accepts instances of Pair<String, T>, allowing for the to syntax to be used when specifying field arguments. The explicit String type on the left-hand side helps keep it all a little more robust.

operator fun <T> String.invoke(vararg args: Pair<String, T>, block: (Field.Builder.() -> Unit)? = null): Field.Builder
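Here’s a hypothetical sketch of how that overload could work. The real signature above returns a Field.Builder; to keep the example short and self-contained, this one builds a directly renderable Field instead, and the rendering rules (quote strings, emit other scalars verbatim) are my assumption of GraphQL-style output.

```kotlin
// Hypothetical sketch of the argument-accepting overload.
class Field(
    val name: String,
    val args: List<Pair<String, Any?>> = emptyList(),
) {
    // Renders "name(key: value, ...)": strings get quoted,
    // other scalars are emitted verbatim.
    fun render(): String =
        if (args.isEmpty()) name
        else args.joinToString(prefix = "$name(", postfix = ")") { (k, v) ->
            "$k: " + if (v is String) "\"$v\"" else v.toString()
        }
}

operator fun <T> String.invoke(vararg args: Pair<String, T>): Field =
    Field(this, args.toList())

fun main() {
    println("allUsers"("type" to "users", "limit" to 10).render())
    // allUsers(type: "users", limit: 10)
}
```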

Wrapping up

As you can see, I’m not a DSL expert (at all!) but this is a fun experiment to play with. You can follow my work at the graphql-forger repo. Please, feel free to contribute by opening issues or pull requests.

I hope you have enjoyed the post and learnt something new about Kotlin.

Testing LiveData on Android

Reading Time: 3 minutes

Testing LiveData represents an interesting challenge due to the peculiarities of this technology and the way it eases development in your Android app.

I’ve recently started to build an Android app to keep motivated on my journey to learn Kotlin. My most recent experience has been with Architecture Components and this brief blog post, in particular, will focus on unit testing your DAO when using LiveData.

What is LiveData?

LiveData is a lifecycle-aware, observable data holder that will help you react to changes in your data source. In my case, I’m using it in combination with Room to make sure my app reacts to the new data becoming available in my database.

Our simple DAO

For this blog post let’s just pretend we have a very simple DAO that looks like the following:
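A sketch of such a DAO (the original snippet was embedded as a gist; the entity name and the androidx artifacts are assumptions):

```kotlin
import androidx.lifecycle.LiveData
import androidx.room.Dao
import androidx.room.Insert
import androidx.room.Query

@Dao
interface PostDao {

    // Room wraps the result in LiveData so observers are notified
    // whenever the underlying table changes.
    @Query("SELECT * FROM post")
    fun getAll(): LiveData<List<Post>>

    @Insert
    fun insert(post: Post)
}
```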

This is just a DAO that helps you fetch posts.

As you can see, the return type for the function is not just a plain List<Post> but it wraps it in a LiveData instance. This is great because we can get an instance of our list of posts once and then observe for changes and react to them.

Let’s test it

The Android Developer documentation has a neat example on how to unit test your DAO:

This pretty simple test aims to verify that the data is exposed correctly and that, once a post is added to the database, the change is reflected by the getAll() invocation.

Unfortunately, by the time we are asserting on it, the value of the LiveData instance will not be populated and will make our test fail. This is because LiveData uses a lifecycle-oriented asynchronous mechanism to populate the underlying data and expects an observer to be registered in order to inform about data changes.

Observe your data

LiveData offers a convenient observe method that allows for observing the data as it changes. We can use it to register an observer that will assert on the expected value.

The observe method has the following signature:

void observe(LifecycleOwner owner, Observer<T> observer)

It expects an observer instance and the owner for its life-cycle. In our case, we’re only interested in keeping the observer around just enough to be able to assert on the changed data. We don’t want the same assertion to be evaluated every time the data changes.

Own your lifecycle

What we can do, then, is build an observer instance that owns its own life-cycle. After handling the onChange event we will mark the observer life-cycle as destroyed and let the framework do the rest.

Let’s see what the observer code looks like:
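A sketch of a self-destroying observer (the original snippet was embedded as a gist; this version assumes the androidx.lifecycle APIs):

```kotlin
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.LifecycleOwner
import androidx.lifecycle.LifecycleRegistry
import androidx.lifecycle.Observer

// An observer that owns its own lifecycle and destroys it after the
// first onChanged call, so LiveData removes it automatically.
class OneTimeObserver<T>(private val handler: (T) -> Unit) :
    Observer<T>, LifecycleOwner {

    private val lifecycle = LifecycleRegistry(this)

    init {
        // Move to an active state so LiveData starts delivering values.
        lifecycle.handleLifecycleEvent(Lifecycle.Event.ON_RESUME)
    }

    override fun getLifecycle(): Lifecycle = lifecycle

    override fun onChanged(t: T) {
        handler(t)
        // ON_DESTROY triggers the removal from the LiveData instance.
        lifecycle.handleLifecycleEvent(Lifecycle.Event.ON_DESTROY)
    }
}
```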

This observer implementation accepts a lambda that will be executed as part of the onChange event. As soon as the handler is complete, its own lifecycle will proceed to mark itself as ON_DESTROY which will trigger the removal process from the LiveData instance.

We can then create an extension on LiveData to leverage this kind of observer:

fun <T> LiveData<T>.observeOnce(onChangeHandler: (T) -> Unit) { 
    val observer = OneTimeObserver(handler = onChangeHandler) 
    observe(observer, observer)
}

Let’s test it again
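Here’s a sketch of what the revised test could look like (the original snippet was embedded as a gist; class, entity and builder names are assumptions):

```kotlin
import androidx.arch.core.executor.testing.InstantTaskExecutorRule
import androidx.room.Room
import androidx.test.core.app.ApplicationProvider
import org.junit.After
import org.junit.Assert.assertEquals
import org.junit.Before
import org.junit.Rule
import org.junit.Test

class PostDaoTest {

    // Swaps the Architecture Components background executor for a
    // synchronous one, so LiveData delivers values immediately.
    @get:Rule
    val instantTaskExecutorRule = InstantTaskExecutorRule()

    private lateinit var db: AppDatabase
    private lateinit var postDao: PostDao

    @Before
    fun setUp() {
        db = Room.inMemoryDatabaseBuilder(
            ApplicationProvider.getApplicationContext(), AppDatabase::class.java
        ).allowMainThreadQueries().build()
        postDao = db.postDao()
    }

    @After
    fun tearDown() = db.close()

    @Test
    fun insertedPostIsReturnedByGetAll() {
        postDao.getAll().observeOnce {
            assertEquals(0, it.size)
        }

        postDao.insert(Post(title = "Hello"))

        postDao.getAll().observeOnce {
            assertEquals(1, it.size)
        }
    }
}
```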

A couple of things to notice this time.

First of all, we’re taking advantage of an InstantTaskExecutorRule. This is a helpful utility rule that takes care of swapping the background asynchronous task executor with a synchronous one. This is vital to be able to deterministically test our code. (Check this out if you wanna know more about JUnit rules).

In addition to that, we’re now leveraging the LiveData extension that we have written to write our assertions:

postDao.getAll().observeOnce {
    assertEquals(0, it.size)
}

We have just asserted in a much more compact and expressive way by keeping all the details inside our observer implementation. We are now asserting on the LiveData instance deterministically, making this kind of test easier to read and write.

Conclusion

I hope this post will help you write tests for your DAO more effectively. This is one of my earliest experiences with Kotlin and Android: please, feel free to comment with better approaches to solve this.

Follow me on Twitter for more!

Cover photo by Irvan Smith on Unsplash

Email development with React and Webpack

Reading Time: 5 minutes

Email development is often an overlooked practice due to the peculiarities and constraints that most email clients impose.

If you want to deliver a delightful user experience with your product, you have to keep its design consistent across all the media it is consumed from.

An email is one of the ways your product might be consumed. It is therefore important that you craft your emails sticking to the same design principles you would follow when developing your product in other contexts.

The email world, though, has its own peculiarities, and the constraints the emails have to be built within often lead to design compromises.

My personal experience with email development

A project I was recently involved in had delivered a new look and feel to the product but it required an email to be sent out about it.

We wanted our email to take advantage of the new designs too and, ideally, also of the tools and components we had already built for delivering the new UI.

For this reason, we decided to build an email development pipeline that would help us achieve this goal.

In this blog post I’m going to focus on the use of React and webpack to build HTML templates to be sent as emails.

Even though I mentioned React, the key aspect of this project is really how webpack has been configured to come up with a small email development environment. I’m pretty sure the same configuration can be easily adapted to work with other frameworks.

The constraints

This project had to allow us to re-use React components in our email development. We also wanted to structure it so that it would allow for more than one email template to reuse our existing components library.

Finally, we wanted to be able to produce, as output, a single, standalone HTML file.

Let’s get started

I created and pushed an example project on GitHub. Check it out if you’re impatient 🙂

The project structure

We’re gonna structure the project in a way that allows multiple email templates to be crafted and hosted in the same repository.

.
├── output/
├── package.json
├── README.md
├── src/
│   ├── components/
│   │   └── SectionOutline/
│   │       ├── index.js
│   │       └── index.scss
│   ├── index.js
│   └── templates/
│       └── HelloWorld/
│           ├── index.js
│           ├── index.scss
│           └── index.test.js
├── webpack.config.js
└── yarn.lock

  • The templates folder will contain all the email templates that will be built, pre-rendered and published into the output folder
  • The components folder will contain all the reusable ReactJS components you want to reuse across templates
  • The output folder contains the resulting HTML output of the template you chose to build

webpack.config.js

The webpack configuration file is probably the most important bit in this project. The build configuration will take care of picking the right template to build by injecting all the information it needs and pre-rendering it to HTML.

Let’s start with the header of our webpack.config.js.

const path = require('path');
const webpack = require('webpack');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const PrerenderSPAPlugin = require('prerender-spa-plugin');
const HTMLInlineCSSWebpackPlugin = require('html-inline-css-webpack-plugin').default;

As you can see, the configuration is importing a few useful plugins.

In particular, PrerenderSPAPlugin, will take care of pre-rendering the whole template and generate a static HTML out of it. This is achieved using puppeteer behind the scenes.

Another important bit, here, is HTMLInlineCSSWebpackPlugin that will help us convert our css into an internal <style> node within our generated HTML. This is particularly useful for emails.

A dynamic entry

We want to be able to compile a single template out of all the ones available in the templates folder. In order to do this, we will create a function that returns the configuration object to be exported as the webpack configuration.

const config = (env) => {
  return ({
    mode: 'production',
    entry: {
      // this will be filled in dynamically
    },
    output: {
      filename: '[name].js',
      path: path.join(__dirname, 'output'),
    },
    module: {
      rules: [
        {
          test: /\.(scss|css)$/,
          use: [
            'css-loader',
            'sass-loader',
          ],
        },
        {
          test: /\.js$/,
          exclude: /node_modules/,
          use: {
            loader: 'babel-loader',
          },
        },
      ],
    },
    plugins: [
      new PrerenderSPAPlugin({
        staticDir: path.join(__dirname, 'output'),
        indexPath: path.join(__dirname, 'output', `${env.entry}.html`),
        routes: ['/'],
        postProcess: context => Object.assign(context, { outputPath: path.join(__dirname, 'output', `${env.entry}.html`) }),
      }),
      new HtmlWebpackPlugin({
        filename: `${env.entry}.html`,
        chunks: [env.entry],
      }),
      new HTMLInlineCSSWebpackPlugin(),

      // the DefinePlugin helps us defining a const
      // variable that will be 'visible' by the JS code
      // we are rendering at runtime
      new webpack.DefinePlugin({
        "EMAIL_TEMPLATE": JSON.stringify(env.entry),
      }),
    ],
  });
};

As you can see, the mandatory entry field has not been set yet. We will handle it later, and we will require it to be passed in by command line.

The entry will be used to load the right template by passing its value to the PrerenderSPAPlugin. It will also be used to tell HtmlWebpackPlugin how to name the result file.

Finally, we export the configuration the way webpack expects it to be exported:

module.exports = (env) => {
  const entry = {};
  entry[env.entry] = './src/index.js';
  const cfg = config(env);
  cfg.entry = entry;
  return (cfg);
};

Whatever entry we specify, we will always associate it with our entry point: index.js.

The index.js file is responsible for loading the template and embedding it into our email layout.

Check out the full webpack.config.js for more information.

A dynamic template

This is the content of the index.js file:

import React, { PureComponent } from 'react';
import ReactDOM from 'react-dom';
import { Box, Item } from 'react-html-email';

/*
 * The EmailContainer will be the body of your Email HTML body.
 * It will receive the right template to load and inject from Webpack
 * and will attempt to load it here and include it in the DOM.
 * 
 * This class expects the EMAIL_TEMPLATE const to be defined.
 */
class EmailContainer extends PureComponent {
  render() {
    // EMAIL_TEMPLATE is defined by the webpack configuration and enables us
    // to include the right template at compile time
    const Template = require(`./templates/${EMAIL_TEMPLATE}`).default;
    return (
      <Box width="600px" height="100%" bgcolor='#f3f3f3' align='center'>
        <Item align='center' valign='top'>
          <Template />
        </Item>
      </Box>
    );
  }
}

ReactDOM.render(<EmailContainer />, document.body);

This is where the magic happens. Every template will be embedded into this code that will produce the final HTML output for the email. As you can see, I am importing a couple of components from the convenient react-html-email project that takes care of providing a few useful components for the email world.

The Template object is dynamically loaded from the EMAIL_TEMPLATE const string that’s expected to be defined when this code is executed. We’re able to do this because we’re using the webpack DefinePlugin:

// the DefinePlugin helps us defining a const
// variable that will be 'visible' by the JS code
// we are rendering at runtime
new webpack.DefinePlugin({
  "EMAIL_TEMPLATE": JSON.stringify(env.entry),
}),

The plugin will take care of setting the const to the full path of the email template I am interested in rendering. In our case the HelloWorld template.

Run it

yarn webpack --env.entry=HelloWorld

The resulting HTML will be stored in the output folder.

(Screenshot: the HelloWorld template rendered in the browser)

I know. Not the most beautiful email, but I’ll leave the design to you. 🙂

I hope you enjoyed the post. Let me know if you have any feedback. Don’t forget to check out the full project on GitHub.

Autocomplete engine in Go: let’s build it

Reading Time: 4 minutes

Some time ago I worked on a small autocomplete web service for fun and curiosity. Part of it consists of pretty much what I’m going to talk about in this post.

We’re gonna build a small completion suggester in Go.

A couple of assumptions I’ll have for this experiment are:

  • we don’t care about sub-word matching
  • we want a little typo tolerance
  • we’ll do case-insensitive matching

The prefix-based map

We’re gonna take advantage of PrefixMap, a small map implementation of mine that helps with prefix-based key matching. You may have seen this kind of map called a Radix Tree, Trie or Prefix Tree, as described on Wikipedia, but I couldn’t find a naming convention for this kind of tree in the Go ecosystem. Maybe you can suggest a better name in the comments.

Anyway, the prefix-based map will be our main data source for suggestions. We’ll use it to store all the suggestions we want to match through our engine.

The Levenshtein distance

We’ll use the Levenshtein distance as a metric to compute how far the typed in string is from the matches that we’ve found. In particular, we’ll define our own similarity metric as:

similarity(match, substr) = 1.0 - levenshtein(match, substr) / max(|match|, |substr|)

Where substr is the typed in string and match is the candidate match we have found.

A similarity of 1.0 means that substr and match are equal.

The problem

We have a list of strings that the input will potentially match against. We want to find a few candidate matches that honor a certain similarity threshold we define.

This is a very simplified version of what happens when you start typing text into an auto-completion-enabled input field. Every time you type, the string you’ve typed so far gets evaluated against a data source to find the most relevant suggestions for you. And all of this has to happen pretty fast to be effective. You want your auto-completion service to be faster than the user who’s typing, so that they can save time by selecting a suggestion rather than typing it out in full.

For this particular reason, the implementation of the prefix map we’re gonna use is able to efficiently find all the values for a given prefix. This will save us from having to populate a more traditional map with all possible prefixes for a given key in advance. Instead, thanks to the tree-like structure the values are stored in, we’ll be able to just traverse the values in the tree that share a common prefix.
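To illustrate the idea, here’s a toy prefix tree (deliberately simplified; PrefixMap’s actual implementation differs) showing why collecting all the values under a prefix is a single traversal rather than a scan of every key:

```go
package main

import "fmt"

// A toy prefix tree: values sharing a prefix live under the same
// subtree, so collecting them is a single walk of that subtree.
type node struct {
	children map[rune]*node
	values   []string
}

func newNode() *node { return &node{children: map[rune]*node{}} }

func (n *node) insert(key, value string) {
	cur := n
	for _, r := range key {
		next, ok := cur.children[r]
		if !ok {
			next = newNode()
			cur.children[r] = next
		}
		cur = next
	}
	cur.values = append(cur.values, value)
}

// getByPrefix walks down to the prefix node, then flattens its subtree.
func (n *node) getByPrefix(prefix string) []string {
	cur := n
	for _, r := range prefix {
		next, ok := cur.children[r]
		if !ok {
			return nil
		}
		cur = next
	}
	var out []string
	var collect func(*node)
	collect = func(nd *node) {
		out = append(out, nd.values...)
		for _, c := range nd.children {
			collect(c)
		}
	}
	collect(cur)
	return out
}

func main() {
	root := newNode()
	for _, c := range []string{"france", "french guiana", "finland"} {
		root.insert(c, c)
	}
	fmt.Println(len(root.getByPrefix("fr"))) // 2
}
```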

An example solution

For this specific example, our data source is going to be a list of world countries. Our target users will have to select the right country so they will start typing it in and we’ll provide a few suggestions to save them typing.

The autocomplete code

First of all, let’s start from the low-hanging fruits.

We’ve defined our concept of similarity so we’ll start by writing a function for it.

As you can see, the function accepts the ld parameter as one of the inputs. That’s the Levenshtein Distance computed between the words we want to know the similarity of.
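The original snippet was embedded as a gist; here’s a sketch of what such a function could look like (the parameter names are assumptions based on the call site shown later in the post):

```go
package main

import "fmt"

// similarity implements the metric defined above:
// 1.0 - levenshtein(match, substr) / max(|match|, |substr|).
func similarity(matchLen, substrLen, ld int) float64 {
	longest := matchLen
	if substrLen > longest {
		longest = substrLen
	}
	return 1.0 - float64(ld)/float64(longest)
}

func main() {
	// Equal strings: distance 0, similarity 1.0.
	fmt.Printf("%.2f\n", similarity(6, 6, 0))
	// One edit between a 5- and a 6-character string.
	fmt.Printf("%.2f\n", similarity(5, 6, 1))
}
```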

Now, let’s populate our PrefixMap with the country list we have. For the purpose of the exercise I’m gonna use the list of country names in English provided by this online service that I’ve turned into a Go slice. You can find a gist for it here.

Now that we have everything we need, let’s work on the main component of our little program: the matching code.

...
values := datasource.GetByPrefix(strings.ToLower(input))
results := make([]*Match, 0, len(values))
for _, v := range values {
	value := v.(string)
	s := similarity(len(value), len(input), LevenshteinDistance(value, input))
	if s >= similarityInput {
		m := &Match{value, s}
		results = append(results, m)
	}
}

fmt.Printf("Result for target similarity: %.2f\n", similarityInput)
PrintMatches(results)
...

We’re taking advantage of the GetByPrefix method from PrefixMap. GetByPrefix will return a flattened collection of all the values in the map that belong to the specified prefix. Pretty handy, isn’t it?

A further filtering step I’m applying there, as you can see, is the similarity check. I go through the list of matches we have retrieved and filter them according to the similarity threshold our program has received as input.

You can find the full example implementation on GitHub.

The output

This is really it. Here’s a few example invocations of the little code we’ve just written:

$ go run autocompleter.go -similarity 0.3 itlaly

Result for target similarity: 0.30
match: 	Italy	similarity: 0.67
$ go run autocompleter.go -similarity 0.2 united

Result for target similarity: 0.20
match: 	United Arab Emirates	similarity: 0.25	
match: 	United Kingdom	similarity: 0.36	
match: 	United States	similarity: 0.38
$ go run autocompleter.go France

Result for target similarity: 0.30
match: 	France	similarity: 1.00	
match: 	Metropolitan France	similarity: 0.32	


Further improvements

As you have seen, our little program allows for small typos in some situations, except when the typo occurs at the beginning of our input. This is because we’re using a PrefixMap, which will not match anything at all if we start with the wrong prefix, of course. An improvement here would probably be to fall back to a full-text search when no matches have been found for the specified similarity in the first pass.

Hopefully this post has been somewhat useful for you, or at least entertaining. Let me know if you need any clarification or if you have a better approach to recommend.

Gogoa – Cocoa bindings for Go

Reading Time: 2 minutes

I don’t actually know why, but I was wondering how easy it is to integrate Cocoa with Go. Well [SPOILER] it looks like it’s super easy!
The first comforting piece of information I encountered was the Go 1.3 changelog, where it states:

Finally, the go command now supports packages that import Objective-C files (suffixed .m) through cgo.

Since I’m now on Go 1.4.2 I thought: “This should be even easier now!”. Reading through the cgo reference, it looks like interoperability is getting stronger in Go, despite this comment on SO.

Hahaha… Go’s biggest weakpoint… interoperability. – Matt Joiner, Jun 12 '11 at 13:54

Gimme the code

I know, too many words and no code so far. It turns out that showing a Cocoa window with Go may be as easy as the following:

If you think this is too clean to be true, you are kind of right. I actually started this project on GitHub, which handles all the ugly bits underneath. I’d really love for you to help me take this a little further.

How does it work?

There’s this go tool called cgo which does most of the magic by itself when you build a hybrid project like this. The reference is rather comprehensive but I’d like to highlight a few issues I encountered while wrapping the code up.

1. Keep import “C” after the C statements

Let’s take my application.go as an example:

As you can see, import "C" is preceded by three directives, as per the cgo standard. Make sure the import "C" directive comes right after the actual C directives, otherwise your code won’t build. This is because C is a pseudo-package that is also generated from the code you provide through those preceding directives.

2. Always specify the compiler directives

Every file exposing the Objective-C bindings should specify the following flags at least.

// #cgo CFLAGS: -x objective-c
// #cgo LDFLAGS: -framework Cocoa

//#include "g_application.h"

import "C"

Otherwise you would end up with a long list of compilation errors. At least this is what happened to me.

3. Always void*

If you end up working with pointers to Objective-C classes, always convert them into void* before returning them to Go. If you don’t, passing them back to C/Objective-C might become painful, or you may end up with errors like the following:

struct size calculation error off=8 bytesize=0

At least this is what occurred to me. You may help me see the light with this.

4. Keep the non-Go code in separate files

Although cgo allows all of the non-Go code to live in the comments, please don’t do that. You’ll end up with unreadable code. Check out my repository for a possible solution.

Now what?

I don’t know, actually. It really took me about an hour to get this code compiling and running, just for the sake of seeing a Cocoa window. I didn’t expect much more.
Also, I’ve done some work with bindings in the past (Qt – Android – JNI, Cocoa – Qt – C++) and it always gets painful when the main framework is not run in the main thread. I don’t know if this will ever be the case with Go. I’m not even sure how far this can be pushed, especially when it comes to goroutines and defer‘ed stuff. But, despite those not-so-pleasant past experiences, I’d like to experiment with Go and Cocoa as well.

A question for you

How would you write tests for those bindings?

Please, contribute

If you are any interested in this project, please, shout out loud. That would really help me experiment a bit more. Even if you just want to complain about something terribly wrong I did, please, do!

Cheers.

A native iOS app with a Qt third-party library

Reading Time: 4 minutes

I’m wrapping up here the pieces of information I used to set up a working native iOS project that takes advantage of a library I wrote mainly using Qt. The library has nothing to do with GUI; it just helps me deal with connectivity, REST API interaction, file system access, compression… Also, the library was born for the desktop, but the effort of porting it to mobile is feasible due to the lack of GUI interaction.
In particular, the following has been tested using Qt 5.3.

Despite the huge level of integration Qt has reached over the years, even on the iOS platform, I still prefer having the UI developed using the native SDK. I’m a rather experienced Cocoa developer, not that experienced with Cocoa Touch, but I managed to get everything I needed working. Since the information I found on this topic is rather sparse, I thought it could be nice to have everything here in a blog post.

Project info

In the following post I’m assuming you want to link a native iOS Xcode project to a third party library written using C++ and Qt. I’m also assuming you are using statically linked Qt for this project (which is the default for the iOS solution).

So what we have:

  • A native Xcode project for iOS: NativeHelloWorld.xcodeproj
  • A static lib Qt project: FancyStaticLib.pro

What we are going to have

  • NativeHelloWorld.xcodeproj
  • FancyStaticLib.xcodeproj as a subproject to NativeHelloWorld
  • Qt for iOS properly linked to make NativeHelloWorld run and use FancyStaticLib

Let’s get things started

So, first of all, let’s instruct Xcode about where to find Qt on our system.
Go to Xcode -> Preferences -> Locations (Tab) -> Source Trees (Sub Tab).
Add the path to your Qt for iOS packages and name it as you wish. I just chose QTLIB_IOS.

I like setting my paths this way in order to keep my project as “exportable” as possible. This way, other devs can join my project pretty easily.

Now, if you haven’t already, you should create an Xcode project for your static library you want to link into your native iOS project.
In order to do so you have to run something like this:

/path/to/your/Qt/ios/bin/qmake -spec macx-xcode -r /path/to/FancyStaticLib.pro CONFIG+=staticlib CONFIG+=whatever_you_need

This will output the FancyStaticLib.xcodeproj file for your static library. You can drag it into your NativeHelloWorld.xcodeproj inside Xcode and add its product (the static lib) as a link dependency to your project.
NOTE: you will have to re-generate FancyStaticLib.xcodeproj each time you change your static library’s .pro file

Link to Qt

Now that we have the project feeling more like a completely native Xcode one, we have to set a few things up in order to keep developing easily, directly from Xcode, through our NativeHelloWorld.xcodeproj.

First of all, look for the Headers Search Path section in the Build Settings of your Xcode project.

We want to make it easy for Xcode to find Qt headers, and also our static lib headers.
Now the variable we previously defined through the Source Trees section in the Xcode preferences comes in handy.
Let’s add the following to the Headers Search Path section:

  • $(QTLIB_IOS)/include
  • /path/to/FancyStaticLib/headers

Now, the actual linker flags.
You will probably need to run your project inside the simulator. When doing so, bear in mind that the simulator has a different architecture from your iOS device. The simulator runs an i386 architecture, and we want to link our project both to the static Qt lib files compiled for that architecture and to those compiled for ARM. This way we will be able to run our project both in the simulator and on a native device.

Scroll down to the Other Linker Flags section: you should at least have a Debug section. Under Debug, as a child item, you should have Any iOS Simulator SDK. If you don’t, click the little “+” icon next to the Debug item and add Any iOS Simulator SDK as a child item.

Our project dependencies are satisfied by the following modules:

  • QtCore
  • QtGui
  • QtNetwork
  • QtScript

The Debug section will host the option for running our app on a native device with debug symbols:

-L$(QTLIB_IOS)/lib -lQt5Core_debug -lQt5Gui_debug -lQt5Network_debug -lQt5Script_debug
Also, don’t forget to include proper platform support with:
-lQt5PlatformSupport_debug -L$(QTLIB_IOS)/plugins/ -lqios_debug
You’ll also need -lz -lqtharfbuzzng_debug most probably.
Also, if you are taking advantage of the bearer plugin to handle connectivity, add the following:
-L$(QTLIB_IOS)/plugins/bearer -lqgenericbearer_debug

Now the Any iOS Simulator SDK section:
Simply replace what you typed in the previous section, changing “_debug” to “_iphonesimulator_debug”, and you are good to go.

The last touch

Your Qt lib will most probably need an instance of QGuiApplication. This usually requires replacing the default main that comes with your project template with a custom one that actually calls QGuiApplication::exec(). Luckily, Qt has made things relatively easy and you won’t need a custom main body. It looks like the Qt folks are cool enough to inject their Qt event dispatcher into the main CocoaTouch run loop, making it easy to spawn QTimers and queued method invocations from Objective-C(++).
Just make sure you initialize a QGuiApplication instance (but you won’t need to call .exec()).

We are going to add the following piece of code inside your application delegate’s - (BOOL)application:(UIApplication *)application willFinishLaunchingWithOptions:(NSDictionary *)launchOptions after having renamed the application delegate file from .m to .mm. Renaming it to .mm enables Objective-C++, which lets us mix C++ and Objective-C in the same source.
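The original snippet was embedded as a gist; here’s a sketch of what it could look like (assuming Qt 5.3 and that your static lib only needs the application instance to exist, never exec()):

```objc
- (BOOL)application:(UIApplication *)application willFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Create the QGuiApplication once; Qt hooks its event dispatcher
    // into the CocoaTouch run loop, so exec() is never called.
    static int argc = 1;
    static char *argv[] = { (char *)"NativeHelloWorld", NULL };
    static QGuiApplication *app = NULL;
    if (app == NULL) {
        app = new QGuiApplication(argc, argv);
    }
    return YES;
}
```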

Conclusions

This is pretty much what is needed to mix a native iOS project with a Qt library. If you encounter linker issues you should be able to easily address them by inspecting the .a symbols you can find inside your $(QTLIB_IOS)/{lib,plugins} directories.
Otherwise, please post your issues here so that we can try to address them together.

Cheers.

Workaround Windows Tray Area Item Preference

Reading Time: 4 minutes

Introduction

I’m not an experienced Windows developer.
I had to make it clear 🙂

Today I had the chance to implement kind of a nasty hack on Windows.
I had to make my application tray icon always visible, at least by default (I swear I honor the user preference afterwards). I know this is one of those don’ts when working with the Windows API, since the docs clearly state that developers have no control over the notification area. But sometimes you feel it deep in your heart that your application user experience would benefit a lot from making your tiny tray icon visible by default. This was my case and, as far as I can see on the internet, it is the case for a lot of apps out there.

I just wanted to write this down as an exercise to help me get back to sharing what I code (not always feasible, though).

There’s plenty of information out there; it’s just that you won’t find it in a single place, and you’ll have to dig a lot before being able to work around this limitation of the tray area. At least this was my experience as a non-experienced Windows developer.

Now that my reasons have been stated clearly, we can go ahead and see some code.

Let’s get started

So, there’s this incredible resource by Geoff Chappell which you should check out if you want to know more about some undocumented/private APIs on Windows. It looks like he has done a huge amount of work around the notification area, documenting well enough how to work around the limitations of the documented API.

As he states here, explorer.exe exposes an ITrayNotify implementation through COM. This interface can be used to query user preferences and status regarding the notification area items, and of course you can also use it to modify such preferences.

So what we are going to do now is request that implementation through the canonical CoCreateInstance call, passing in the required CLSID information. That information is retrievable from Geoff’s docs, but you can also look for ITrayNotify through regedit.exe to find the needed CLSID.

Bringing a few pieces together, here is a quick recap of the declarations you’ll need to make the CoCreateInstance call succeed.

[sourcecode lang="cpp"]

#ifndef __ITrayNotify_INTERFACE_DEFINED__
#define __ITrayNotify_INTERFACE_DEFINED__

class INotificationCB; // forward declaration of the private callback interface

class __declspec(uuid("FB852B2C-6BAD-4605-9551-F15F87830935")) ITrayNotify : public IUnknown
{
public:
    virtual HRESULT __stdcall RegisterCallback(INotificationCB* callback) = 0;
    virtual HRESULT __stdcall SetPreference(const NOTIFYITEM* notify_item) = 0;
    virtual HRESULT __stdcall EnableAutoTray(BOOL enabled) = 0;
};
#endif // #ifndef __ITrayNotify_INTERFACE_DEFINED__

const CLSID CLSID_TrayNotify = {
    0x25DEAD04,
    0x1EAC,
    0x4911,
    {0x9E, 0x3A, 0xAD, 0x0A, 0x4A, 0xB5, 0x60, 0xFD}};

[/sourcecode]
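
The code above (and the helper further down) also relies on the undocumented NOTIFYITEM structure, which appears in no public SDK header. Here is a hedged declaration matching the field names used in this post; the layout is the one circulated in public reverse-engineered samples, so verify it against Geoff’s documentation before relying on it:

```cpp
// Hedged declaration: NOTIFYITEM is not part of the public SDK.
// Field order and types follow publicly circulated samples; verify before use.
typedef struct tagNOTIFYITEM {
    PWSTR exe_name;   // the process that registered the icon
    PWSTR tip;        // the icon tooltip
    HICON icon;       // the icon handle
    HWND  hwnd;       // the window that registered the icon
    DWORD preference; // 0x00 / 0x01 / 0x02, see the helper below
    UINT  id;         // the icon id
    GUID  guid;       // the icon GUID, if any
} NOTIFYITEM;
```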

This is enough to request the instance through CoCreateInstance. Unfortunately, as I discovered while testing my own code, this won’t work on Windows 8, where apparently this private API has changed. You know, this is the drawback of using private APIs :).
Anyway, I spent the day looking for a solution and fortunately I found the appropriate interface for Windows 8 too. You can dig up the same information by running OllyDbg against explorer.exe on Windows 8.

[sourcecode lang="cpp"]
class __declspec(uuid("D133CE13-3537-48BA-93A7-AFCD5D2053B4")) ITrayNotifyWin8 : public IUnknown
{
public:
    virtual HRESULT __stdcall RegisterCallback(INotificationCB* callback, unsigned long*) = 0;
    virtual HRESULT __stdcall UnregisterCallback(unsigned long*) = 0;
    virtual HRESULT __stdcall SetPreference(NOTIFYITEM const*) = 0;
    virtual HRESULT __stdcall EnableAutoTray(BOOL) = 0;
    virtual HRESULT __stdcall DoAction(BOOL) = 0;
};
[/sourcecode]

Getting hold of the appropriate ITrayNotify interface, though, is not enough. We are going to use another private interface, called INotificationCB, which will help us get the current information regarding our notification item.
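
Since INotificationCB is private too, we have to declare it ourselves. The declaration below follows the shape used in public samples; the uuid is the one those samples report and is an assumption, so double-check it on your system (e.g. through regedit.exe):

```cpp
// Hedged declaration: INotificationCB is a private interface with no public
// header. The uuid below is the one reported by reverse-engineered samples.
class __declspec(uuid("D782CCBA-AFB0-43F1-94DB-FDA3779EACCB")) INotificationCB : public IUnknown
{
public:
    // Invoked by explorer.exe, once per notification area item.
    virtual HRESULT __stdcall Notify(ULONG event, NOTIFYITEM* item) = 0;
};
```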

So let’s write down our little helper class that will take care of modifying the preferences for our notification item.

[sourcecode lang="cpp"]
// TinyTrayHelper.h

#include <windows.h>
#include <shellapi.h> // NOTIFYICONDATA

class TinyTrayHelper : public INotificationCB
{
public:
    TinyTrayHelper(NOTIFYICONDATA* nid);
    virtual ~TinyTrayHelper();

    HRESULT __stdcall Notify(ULONG, NOTIFYITEM *) override;

    bool ensureTrayItemVisible();

    ULONG __stdcall AddRef(void) override;
    ULONG __stdcall Release(void) override;
    HRESULT __stdcall QueryInterface(REFIID riid, void **ppvObject) override;

private:
    NOTIFYICONDATA *_nid;
    NOTIFYITEM _nit;
    bool _win8; // set by ensureTrayItemVisible(), used to pick the right interface
    wchar_t _exeName[MAX_PATH];
};
[/sourcecode]

Now let’s see the actual implementation for our helper.

[sourcecode lang="cpp"]
#include "TinyTrayHelper.h"

#include <sdkddkver.h>
#include <VersionHelpers.h>

#include <stdio.h>
#include <string.h>

static void* CreateTrayNotify(bool win8);

TinyTrayHelper::TinyTrayHelper(NOTIFYICONDATA *nid) :
    _nid(nid),
    _win8(false)
{
    CoInitialize(NULL);

    ::GetModuleFileName(NULL, _exeName, MAX_PATH);

    // here we prepare the NOTIFYITEM instance
    // that is required to change settings
    memset(&_nit, 0, sizeof(_nit));
    _nit.exe_name = _exeName;
    _nit.guid = _nid->guidItem;
    _nit.hwnd = _nid->hWnd;
    _nit.icon = _nid->hIcon;
}

TinyTrayHelper::~TinyTrayHelper()
{
    CoUninitialize(); // match the CoInitialize in the constructor
}

HRESULT __stdcall TinyTrayHelper::Notify(ULONG, NOTIFYITEM *item)
{
    if (item->hwnd != _nid->hWnd || item->guid != _nid->guidItem) {
        // this is a notification about an item that is not ours
        // so let's just ignore it
        return S_OK;
    }

    _nit = NOTIFYITEM(*item);

    return S_OK;
}

ULONG __stdcall TinyTrayHelper::AddRef(void)
{
    // the helper lives on the caller's side, no real refcounting needed
    return 1;
}

ULONG __stdcall TinyTrayHelper::Release(void)
{
    return 1;
}

HRESULT __stdcall TinyTrayHelper::QueryInterface(REFIID riid, void **ppvObject)
{
    if (ppvObject == NULL) return E_POINTER;

    if (riid == __uuidof(INotificationCB)) {
        *ppvObject = (INotificationCB*)this;
    } else if (riid == IID_IUnknown) {
        *ppvObject = (IUnknown *) this;
    } else {
        return E_NOINTERFACE;
    }

    AddRef();
    return S_OK;
}

bool TinyTrayHelper::ensureTrayItemVisible()
{
    _win8 = IsWindows8OrGreater();
    void *trayNotify = CreateTrayNotify(_win8);
    if (!trayNotify) {
        return false;
    }

    HRESULT hr;
    if (_win8) {
        auto *win8TrayNotify = static_cast<ITrayNotifyWin8*>(trayNotify);
        unsigned long callback_id = 0;
        // this is synchronous: our Notify() is invoked for each item
        // before RegisterCallback returns
        hr = win8TrayNotify->RegisterCallback(static_cast<INotificationCB*>(this), &callback_id);
        win8TrayNotify->UnregisterCallback(&callback_id);
    } else {
        hr = ((ITrayNotify*)trayNotify)->RegisterCallback(static_cast<INotificationCB*>(this));
        ((ITrayNotify*)trayNotify)->RegisterCallback(NULL); // unregister
    }

    if (FAILED(hr)) {
        ((IUnknown*)trayNotify)->Release();
        return false;
    }

    // now we should have up-to-date information
    // about our notification icon item

    if (_nit.preference != 0x01) { // 0x01 means "always hide": honor the user preference
        _nit.preference = 0x02;    // 0x02 means "always show"

        if (_win8) {
            ((ITrayNotifyWin8*)trayNotify)->SetPreference(&_nit);
        } else {
            ((ITrayNotify*)trayNotify)->SetPreference(&_nit);
        }
    }
    ((IUnknown*)trayNotify)->Release();
    return true;
}

static void* CreateTrayNotify(bool win8)
{
    IID trayNotifyIID;
    if (win8) {
        trayNotifyIID = __uuidof(ITrayNotifyWin8); // the interface we defined previously
    } else {
        trayNotifyIID = __uuidof(ITrayNotify);
    }

    void *trayNotify;
    HRESULT hr = CoCreateInstance(
        CLSID_TrayNotify,
        NULL,
        CLSCTX_LOCAL_SERVER,
        trayNotifyIID,
        (PVOID *) &trayNotify);

    if (hr == S_OK) {
        return trayNotify;
    } else {
        printf("Cannot get reference to ITrayNotify instance\n");
    }

    return NULL;
}

[/sourcecode]

I see what you did there

So, TinyTrayHelper basically does four things here:

  1. Creates a NOTIFYITEM instance based on a NOTIFYICONDATA instance
  2. Chooses the appropriate ITrayNotify instance based on the current OS
  3. Registers itself as an instance of INotificationCB to receive the relevant information inside the Notify method
  4. Finally calls SetPreference to change the preference regarding the notification area item
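
Those magic 0x01/0x02 preference values deserve names. Here is a minimal, Windows-free model of the decision made in ensureTrayItemVisible (the constant names are my own; the raw values are the ones used above):

```cpp
// Hedged model of the NOTIFYITEM preference values as used in this post.
// The names are made up for readability; the raw values match the helper above.
enum TrayItemPreference {
    PREFERENCE_SHOW_WHEN_ACTIVE = 0x00, // default: hidden, promoted on notifications
    PREFERENCE_ALWAYS_HIDE      = 0x01,
    PREFERENCE_ALWAYS_SHOW      = 0x02
};

// Mirrors ensureTrayItemVisible(): honor an explicit "always hide" choice,
// otherwise force the icon to be always visible.
inline int desiredPreference(int current) {
    return current == PREFERENCE_ALWAYS_HIDE ? PREFERENCE_ALWAYS_HIDE
                                             : PREFERENCE_ALWAYS_SHOW;
}
```

In other words, the helper only ever promotes an item to "always show" and leaves an explicit "always hide" untouched.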

What now?

What you need now is just to create an instance of our TinyTrayHelper, passing in your NOTIFYICONDATA instance reference, and then call ensureTrayItemVisible to change the notification area preference regarding your item.

Please note that I adapted a more complex code to build this example so I didn’t test this code specifically. Use at your own risk.

I hope this will be useful to you. Please let me know if I made any tremendous mistakes here, and I’ll try to fix them!

Cheers.