Author Archives: Arno

QtGamepad ported to Qt 6

For a project of mine I need gamepad support. In the past, I’ve happily used QtGamepad, but that has not been ported to Qt 6. It’s not dead, but Andy (QtGamepad’s maintainer) wants to do some re-architecting for a Qt 6 release.

I need QtGamepad now, however, so I’ve ported it myself. It’s not a whole lot of code and Qt’s pro2cmake.py made it a breeze. I’ve renamed the whole thing to QtGamepadLegacy and pushed it to GitHub. So whenever the official QtGamepad is released there should be no naming conflicts. I’ve tested with Qt 6.6.0 and the evdev plugin.
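
Since it’s a straight port, the API should be unchanged from Qt 5’s QtGamepad. As a rough, untested sketch – assuming the QGamepad and QGamepadManager headers and class names from the original module are still in place – basic usage would look something like this:

#include <QCoreApplication>
#include <QDebug>
#include <QGamepad>
#include <QGamepadManager>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Pick the first connected gamepad, if any.
    const auto gamepads = QGamepadManager::instance()->connectedGamepads();
    if (gamepads.isEmpty()) {
        qWarning() << "No gamepad connected.";
        return 1;
    }

    QGamepad gamepad(gamepads.first());
    QObject::connect(&gamepad, &QGamepad::axisLeftXChanged,
                     [](double value) { qDebug() << "Left X axis:" << value; });
    QObject::connect(&gamepad, &QGamepad::buttonAChanged,
                     [](bool pressed) { qDebug() << "Button A:" << pressed; });

    return app.exec();
}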

I don’t plan on adding any new features to the port. I’ll try to keep it compatible with upcoming Qt releases, though.

Pitfalls of lambda capture initialization

Recently, I’ve stumbled across some behavior of C++ lambda captures that has, initially, made absolutely no sense to me. Apparently, I wasn’t alone with this, because it has resulted in a memory leak in QtFuture::whenAll() and QtFuture::whenAny() (now fixed; more on that further down).

I find the corner cases of C++ quite interesting, so I wanted to share this. Luckily, we can discuss this without getting knee-deep into the internals of QtFuture. So, without further ado:

Time for an example

Consider this (godbolt):

#include <iostream>
#include <functional>
#include <memory>
#include <cassert>
#include <vector>

struct Job
{
    template<class T>
    Job(T &&func) : func(std::forward<T>(func)) {}

    void run() { func(); hasRun = true; }

    std::function<void()> func;
    bool hasRun = false;
};

std::vector<Job> jobs;

template<class T>
void enqueueJob(T &&func)
{
    jobs.emplace_back([func=std::forward<T>(func)]() mutable {
        std::cout << "Starting job..." << std::endl;
        // Move func to ensure that it is destroyed after running
        auto fn = std::move(func);
        fn();
        std::cout << "Job finished." << std::endl;
    });
}

int main()
{
    struct Data {};
    std::weak_ptr<Data> observer;
    {
        auto context = std::make_shared<Data>();
        observer = context;
        enqueueJob([context] {
            std::cout << "Running..." << std::endl;
        });
    }
    for (auto &job : jobs) {
        job.run();
    }
    assert((observer.use_count() == 0) 
                && "There's still shared data left!");
}

Output:

Starting job...
Running...
Job finished.

The code is fairly straightforward. There’s a list of jobs to which we can append with enqueueJob(). enqueueJob() wraps the passed callable with some debug output and ensures that it is destroyed after being called. The Job objects themselves are kept around a little longer; we can imagine doing something with them, even though the jobs have already been run.
In main(), we enqueue a job that captures some shared state Data, run all jobs, and finally assert that the shared Data has been destroyed. So far, so good.

Now you might have some issues with the code. Apart from the structure, which, arguably, is a little forced, you might think “context is never modified, so it should be const!”. And you’re right, that would be better. So let’s change it (godbolt):

--- old
+++ new
@@ -34,7 +34,7 @@
     struct Data {};
     std::weak_ptr<Data> observer;
     {
-        auto context = std::make_shared<Data>();
+        const auto context = std::make_shared<Data>();
         observer = context;
         enqueueJob([context] {
             std::cout << "Running..." << std::endl;

Looks like a trivial change, right? But when we run it, the assertion fails now!

int main(): Assertion `(observer.use_count() == 0) && "There's still shared data left!"' failed.

How can this be? We’ve just declared a variable const that is never modified anyway! This does not seem to make any sense.
But it gets better: we can fix this by adding what looks like a no-op (godbolt):

--- old
+++ new
@@ -34,9 +34,9 @@
     struct Data {};
     std::weak_ptr<Data> observer;
     {
-        auto context = std::make_shared<Data>();
+        const auto context = std::make_shared<Data>();
         observer = context;
-        enqueueJob([context] {
+        enqueueJob([context=context] {
             std::cout << "Running..." << std::endl;
         });
     }

Wait, what? We just have to tell the compiler that we really want to capture context by the name context – and then it will correctly destroy the shared data? Would this be an application for the really keyword? Whatever it is, it works; you can check it on godbolt yourself.

When I first stumbled across this behavior, I just couldn’t wrap my head around it. I was about to think “compiler bug”, as unlikely as that may be. But GCC and Clang both behave like this, so it’s pretty much guaranteed not to be a compiler bug.

So, after combing through the interwebs, I’ve found this StackOverflow answer that gives the right hint: [context] is not the same as [context=context]! The latter drops cv qualifiers while the former does not! Quoting cppreference.com:

Those data members that correspond to captures without initializers are direct-initialized when the lambda-expression is evaluated. Those that correspond to captures with initializers are initialized as the initializer requires (could be copy- or direct-initialization). If an array is captured, array elements are direct-initialized in increasing index order. The order in which the data members are initialized is the order in which they are declared (which is unspecified).

https://en.cppreference.com/w/cpp/language/lambda

So [context] will direct-initialize the corresponding data member, whereas [context=context] (in this case) does copy-initialization! In terms of code this means:

  • [context] is equivalent to decltype(context) captured_context{context};, i.e. const std::shared_ptr<Data> captured_context{context};
  • [context=context] is equivalent to auto captured_context = context;, i.e. std::shared_ptr<Data> captured_context = context; (the sketch below spells this out)
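
To make the difference tangible, here’s a minimal standalone sketch of my own (the variable names are invented) that writes down both equivalents and checks their const-ness:

#include <memory>
#include <type_traits>

int main()
{
    struct Data {};
    const auto context = std::make_shared<Data>();

    // What [context] declares for the closure's data member: the declared
    // type of context, const qualifier included.
    decltype(context) direct_member{context};
    static_assert(std::is_const<decltype(direct_member)>::value,
                  "plain capture keeps the const");

    // What [context=context] declares: deduced as if by auto, so the
    // top-level const is dropped.
    auto init_member = context;
    static_assert(!std::is_const<decltype(init_member)>::value,
                  "init-capture drops the const");
}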

Good, so writing [context=context] actually drops the const qualifier on the captured variable! For the lambda, it is as if the const had never been written and direct-initialization had been used.

But why does this even matter? Why do we leak references to the shared_ptr<Data> if the captured variable is const? We only ever std::move() or std::forward() the lambda, right up to the place where we invoke it. After that, it goes out of scope, and all captures should be destroyed as well. Right?

Nearly. Let’s think about what the compiler generates for us when we write a lambda. For the direct-initialization capture (i.e. [context]() {}), the compiler roughly generates something like this:

struct lambda
{
    const std::shared_ptr<Data> context;
    // ...
};

This is what we want to std::move() around. But it contains a const data member, and that cannot be moved from (it’s const, after all)! So even with std::move(), there’s still a part of the lambda that lingers, keeping a reference to context. In the example above, the lingering part is in func, the capture of the wrapper lambda created in enqueueJob(). We move from func to ensure that all captures are destroyed when it goes out of scope. But for the const std::shared_ptr<Data> context, which is hidden inside func, this does not work. It keeps holding the reference. The wrapper lambda itself would have to be destroyed for the reference count to drop to zero.
However, we keep the already-finished jobs around, so this never happens. The assertion fails.
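
To see the effect in isolation, here’s a small sketch of my own (names invented) showing that “moving” a closure whose data member is a const shared_ptr actually copies the reference, while the init-captured version really gives it up:

#include <cassert>
#include <memory>
#include <utility>

int main()
{
    struct Data {};
    const auto context = std::make_shared<Data>();

    auto direct = [context] {};            // member type: const std::shared_ptr<Data>
    auto init   = [context = context] {};  // member type: std::shared_ptr<Data>

    // context itself plus one copy in each closure
    assert(context.use_count() == 3);

    // The const member cannot be moved from, so the closure's move
    // constructor falls back to copying it: the count goes up and
    // 'direct' still holds its reference.
    auto moved_direct = std::move(direct);
    assert(context.use_count() == 4);

    // The init-captured member is non-const and is genuinely moved:
    // 'init' gives up its reference, so the count stays the same.
    auto moved_init = std::move(init);
    assert(context.use_count() == 4);
}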

How does this matter for Qt?

QtFuture::whenAll() and whenAny() create a shared_ptr to a Context struct and capture that in two lambdas used as continuations on a QFuture. Upon completion, the Context stores a reference to the QFuture. Similar to what we have seen above, continuations attached to QFuture are also wrapped by another lambda before being stored. When invoked, the “inner” lambda is supposed to be destroyed, while the outer (wrapper) one is kept alive.

In contrast to our example, the QFuture situation caused an actual memory leak, though (QTBUG-116731): the “inner” continuation references the Context, which references the QFuture, which in turn references the continuation lambda, which again references the Context. The “inner” continuation could not be std::move()d and destroyed after invocation, because the std::shared_ptr data member was const. The result was a reference cycle, leaking memory. I’ve also boiled this more complex case down to a small example (godbolt).

The patch for all of this is very small. As in the example, it simply consists of making the capture [context=context]. It’s included in the upcoming Qt 6.6.0.

Bottom line

I seriously didn’t expect these differences in the initialization of by-value lambda captures. Why doesn’t [context] deduce the member’s type the way auto would (dropping cv qualifiers), i.e. behave exactly like [context=context]? That would be the sane thing to do, I think. I guess there is some reasoning behind this, but I couldn’t find it (yet). It probably also doesn’t make a difference in the vast majority of cases.

In any case, I liked hunting this one down and getting to know another one of those dark corners of the C++ spec. So it’s not all bad 😉.

New client languages for Qt WebChannel

At the company I’m working at, we’re employing Qt WebChannel for remote access to some of our software. Qt WebChannel was originally designed for interfacing with JavaScript clients, but it’s actually very well suited to interface with any kind of dynamic language.

We’ve created client libraries for a few important languages with as few dependencies as possible: pywebchannel (Python, no dependencies), webchannel.net (.NET/C#, depends on JSON.NET) and webchannel++ (header-only C++14, depends on Niels Lohmann’s JSON library).

Python and .NET are a pretty good match: their dynamic language features make it possible to use remote methods and properties as if they were native. Being completely statically typed, C++ makes the interface a little clunkier, although variadic templates help a lot to make it easier to use.

As with the original Qt WebChannel C++ classes, transports are completely user defined. When sensible, a default implementation of a transport is provided.

Long story short, here’s an example of how to use the Python client. It’s designed to talk with the chatserver example of the Qt WebChannel module over a WebSocket. It even supports the asyncio features of Python 3! Relevant excerpt without some of the boilerplate:

async def run(webchannel):
    # Wait for initialized
    await webchannel
    print("Connected.")

    chatserver = webchannel.objects["chatserver"]

    username = None

    async def login():
        nonlocal username
        username = input("Enter your name: ")
        return await chatserver.login(username)

    # Loop until we get a valid username
    while not await login():
        print("Username already taken. Please enter a new one.")

    # Keep the username alive
    chatserver.keepAlive.connect(lambda *args: chatserver.keepAliveResponse(username))

    # Connect to chat signals
    chatserver.newMessage.connect(print_newmessage)
    chatserver.userListChanged.connect(lambda *args: print_newusers(chatserver))

    # Read and send input
    while True:
        msg = await ainput()
        chatserver.sendMessage(username, msg)


print("Connecting...")
loop = asyncio.get_event_loop()
proto = loop.run_until_complete(websockets.client.connect(CHATSERVER_URL, create_protocol=QWebChannelWebSocketProtocol))
loop.run_until_complete(run(proto.webchannel))

pywebchannel can also be used without the async/await syntax and should be compatible with Python 2 as well.

I would also really like to push the code upstream, but I don’t know when I’ll have the time to spare. Then there’s also the question of how to build and deploy the libraries. Would the qtwebchannel module install to $PYTHONPREFIX? Would it depend on a C# compiler (for which support would have to be added to qmake)?

In any case, I think the client libraries can come in handy and broaden the range of applications for Qt WebChannel.

QRPC: A Qt remoting library

This project of mine has been around for quite a while already. I’ve never much publicised it, however, and for the past year the code hasn’t seen any changes (until a few days ago, anyway). But related to my current job, I’ve found a new need for remote procedure calls, so I came back to the code after all.

QRPC is a remoting library tightly integrated with Qt: in simplified terms, it’s an easy way to get signal-slot connections over the network. At least that was its original intention, and hence comes the name (QRPC as in “Qt Remote Procedure Call”).

But actually, it’s a little more than that. QRPC pipes the whole of Qt’s meta-object functionality over a given transport. That can be a network socket, unix domain socket or something more high-level like ZeroMQ. But it could, for example, also be a serial port (not sure why anybody would want that, though).
The actual remoting system works as follows: A server marks some objects for “export”, meaning they will be accessible by clients. Clients can then request any such object by its identifier. The server then serialises the whole QMetaObject hierarchy of the exported object and sends it to the client. There, it is reconstructed and used as the dynamic QMetaObject of a specialised QObject. Thus, the client not only has access to signals and slots, but also to properties and Q_CLASSINFOs (actually anything that a QMetaObject has to offer). The property support also includes dynamic properties, not only the statically defined QMetaProperties.
Method invocations, signal emissions and property changes are serialised and sent over the transport – after all, a QMetaObject without the dynamics isn’t that useful ;).

To give an impression of how QRPC is used, here’s an excerpt from the example included in the repository:

Server:

Widget::Widget(QWidget *parent) :
    QWidget(parent), ui(new Ui::Widget),
    localServer(new QLocalServer(this)),
    exporter(new QRPCObjectExporter(this))
{
    ui->setupUi(this);

    // Export the QSpinBox with the identifier "spinbox".
    exporter->exportObject("spinbox", ui->spinBox);

    // Handle new connections, serve under the name "qrpcsimpleserver".
    connect(localServer, &QLocalServer::newConnection, this, &Widget::handleNewConnection);
    localServer->listen("qrpcsimpleserver");
}

void Widget::handleNewConnection()
{
    // Get the connection socket
    QLocalSocket *socket = localServer->nextPendingConnection();

    // Create a transport...
    QRPCIODeviceTransport *transport = new QRPCIODeviceTransport(socket, socket);

    // ... and the server communicating over the transport, serving objects exported from the
    // exporter. Both the transport and the server are children of the socket so they get
    // properly cleaned up.
    new QRPCServer(exporter, transport, socket);
}

Client:

Widget::Widget(QWidget *parent) :
    QWidget(parent),
    ui(new Ui::Widget), socket(new QLocalSocket(this)), rpcClient(0)
{
    // ...
    // Connect to the server on button click
    connect(ui->connect, &QPushButton::clicked, [this]() {
        socket->connectToServer("qrpcsimpleserver");
    });

    // Handle connection events
    connect(socket, &QLocalSocket::connected, this, &Widget::connectionEstablished);
    connect(socket, &QLocalSocket::disconnected, this, &Widget::connectionLost);
}

void Widget::connectionEstablished()
{
    // Create a transport...
    QRPCIODeviceTransport *transport = new QRPCIODeviceTransport(socket, this);
    // ... and a client communicating over the transport.
    QRPCClient *client = new QRPCClient(transport, this);

    // When we receive a remote object (in this example, it can only be the spinbox),
    // synchronise the state and connect the slider to it.
    connect(client, &QRPCClient::remoteObjectReady, [this](QRPCObject *object) {
        ui->horizontalSlider->setValue(object->property("value").toInt());
        ui->horizontalSlider->setMinimum(object->property("minimum").toInt());
        ui->horizontalSlider->setMaximum(object->property("maximum").toInt());

        connect(ui->horizontalSlider, SIGNAL(valueChanged(int)), object, SLOT(setValue(int)));
    });

    // Request the spinbox.
    client->requestRemoteObject("spinbox");

    // Clean up when we lose the connection.
    connect(socket, &QLocalSocket::disconnected, client, &QObject::deleteLater);
    connect(socket, &QLocalSocket::disconnected, transport, &QObject::deleteLater);

    // ...
}

Argument or property types have to be registered metatypes (same as when using queued connections) and serialisable with QDataStream. The QDataStream operators also have to be registered with Qt’s meta type system (cf. QMetaType docs).
At the moment, one shortcoming is that references to QObjects (i.e. “QObject*”) can’t be transferred over the network, even if the relevant QObjects are “exported” by the server. Part of the problem is that Qt doesn’t allow us to register custom stream operators for “QObject*”. This could be solved by manually traversing the serialised values, but that will negatively impact the performance. Usually one can work around that limitation, though.
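
For illustration, making a custom value type usable as an argument or property type would look roughly like this. The Measurement type is made up for this sketch, and qRegisterMetaTypeStreamOperators() is the (pre-Qt 6) way of registering the QDataStream operators:

#include <QDataStream>
#include <QMetaType>
#include <QString>

// A hypothetical value type passed as a signal/slot argument over QRPC.
struct Measurement
{
    double value = 0.0;
    QString unit;
};
Q_DECLARE_METATYPE(Measurement)

QDataStream &operator<<(QDataStream &out, const Measurement &m)
{
    return out << m.value << m.unit;
}

QDataStream &operator>>(QDataStream &in, Measurement &m)
{
    return in >> m.value >> m.unit;
}

// Somewhere during startup:
//     qRegisterMetaType<Measurement>("Measurement");
//     qRegisterMetaTypeStreamOperators<Measurement>("Measurement");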

Furthermore, method return values are intentionally not supported. Supporting them would first require a more complex system to keep track of state and secondly impose blocking method invocations, both of which run contrary to my design goals.

In some final remarks, I’d like to point out two similar projects: for one, there’s QtRemoteObjects in Qt’s playground, which works quite similarly to QRPC. I began working on QRPC at around the same time as QtRemoteObjects was started and didn’t know it existed until quite some time later. Having glanced over its source code, I find it a little too complex and too broad for my needs, without offering the same flexibility (like custom transports). I must admit that I haven’t checked it out in more detail, though.
Then there’s also QtWebChannel which again offers much of the metaobject functionality over the web, but this time specifically geared towards JavaScript/HTML5 clients and Web apps. I thought about reusing part of its code, but ultimately its focus on JS was too strong to make this a viable path.

QuickPlot: A collection of native QtQuick plotting items

For a project at university I recently needed a plotting widget to display some data. Naturally, Qwt came to my mind. I’ve already been using it in a number of other projects and it works great.

The one drawback, however: The project was intended to be run on the Raspberry Pi. Now the X-Server on the R-Pi doesn’t have any 3D acceleration yet, so the performance of Qwt was subpar.

Luckily, Qt has the eglfs backend (using Broadcom’s EGL libraries), which works very well on the R-Pi. The problem: you have to use QML/QtQuick with it, so Qwt was out of the game (I did play around with creating a QtQuick backend for Qwt, but since Qwt is QPainter-based, it was still nowhere near performant enough). A more QtQuick-like solution was needed.

Since QtQuick supports the HTML5 Canvas API, I thought that maybe some HTML5 chart library like Chart.js could do the trick. The good news: it works pretty much out of the box and you get very pretty charts. The bad news: performance on the Raspberry Pi is more or less abysmal (on a laptop or desktop PC it’s fine, though). Even some hand-crafted plotting code based on the Canvas API ate the better part of the R-Pi’s CPU when displaying a continuous stream of data. That isn’t really surprising, considering that it was all based on JavaScript and redrew the whole plot area into an FBO on every data update.

So all of the “simple” solutions were dead ends. Eventually I ended up writing my own plot items, this time based on QtQuick’s Scene Graph to get the best performance out of the little computer. I was actually surprised that no one had done something like this yet – or at least I couldn’t find anything.
Anyway, here’s a screenshot of the thing displaying a sine wave with axes:

The sine wave is actually scrolling through the view, always showing the most recent 300 data points of the data source. It’s even anti-aliased on most hardware, including the R-Pi – just not on my laptop’s Intel GPU, on which I took the screenshot. The performance on the Raspberry Pi is now very good, using less than 5% of the CPU, while the other solutions were north of 70% for a continuously updated data source (values from top, not claiming any sort of accuracy).

The API is somewhat inspired by Qwt’s. There’s the PlotArea Item, which is responsible for managing all items of the plot. It keeps a collection of PlotItems, displays the axes and makes the ScaleEngines (which determine the axis limits) known to every item and axis in the plot.
I haven’t implemented more than the most basic PlotItems and ScaleEngines at the moment (because those are currently the only things I need):

  • Curve: exactly what the name implies – a set of points connected by lines.
  • ScrollingCurve: A curve which only displays the last N data points. If the data is continuously updated, this looks as if the curve is scrolling from right to left.
  • TightScaleEngine: A ScaleEngine which sets the axis limits to the overall minima and maxima of all the plot items.
  • FixedScaleEngine: A ScaleEngine which sets the axis limits to constant values.

The axis ticks are placed based on the nicenum algorithm from Academic Press’s Graphics Gems book series. The API is designed with C++/Qt data sources in mind: data is passed as QVector<float> or QVector<QPointF>, preferably as an argument to a signal-slot connection (see the sketch below). Note that the use of QVector makes setting the data statically from QML a little difficult. I’m open to suggestions to improve this, but I would really like to avoid converting a batch of data to a QVariantList and back just to integrate better with QML/JS.
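
As a sketch of what such a C++ data source could look like – the class and signal names are my own invention, and the signal would be connected to whichever data slot the plot item exposes:

#include <QObject>
#include <QTimer>
#include <QVector>
#include <cmath>

// Hypothetical data source emitting a batch of samples every 20 ms.
class SineSource : public QObject
{
    Q_OBJECT
public:
    explicit SineSource(QObject *parent = nullptr) : QObject(parent)
    {
        auto *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, [this]() {
            m_phase += 0.05;
            emit newData(QVector<float>{ static_cast<float>(std::sin(m_phase)) });
        });
        timer->start(20);
    }

signals:
    void newData(const QVector<float> &points);

private:
    double m_phase = 0.0;
};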

You can find the code on GitLab (formerly on Gitorious), licensed under the GPLv3: https://gitlab.com/qtquickplot/qtquickplot

Edit: Some example code would probably be nice, so here is the QML code used in the above screenshot:

import QtQuick 2.1
import QuickPlot 1.0

Rectangle {
    visible: true
    width: 640
    height: 480

    MouseArea {
        anchors.fill: parent
        onClicked: {
            Qt.quit();
        }
    }

    PlotArea {
        id: plotArea
        anchors.fill: parent

        yScaleEngine: FixedScaleEngine {
            max: 1.5
            min: -1.5
        }

        items: [
            ScrollingCurve {
                id: meter;
                numPoints: 300
            }
        ]
    }

    Timer {
        id: timer;
        interval: 20;
        repeat: true;
        running: true;

        property real pos: 0

        onTriggered: {
            meter.appendDataPoint( Math.sin(pos) );
            pos += 0.05;
        }
    }
}

Edit 2: Due to Gitorious being acquired by GitLab, the code has been moved to the latter: https://gitlab.com/qtquickplot/qtquickplot

Connecting C++11 lambdas to Qt4 Signals with libffi (now portable)

I know that this is probably not very useful, considering that it’s not portable and that Qt5 is about to be released in June, but I found it interesting to experiment with the new language features and libffi.

The result is a class which lets you connect any function pointer or lambda, with or without bound variables, to Qt4 signals. Example of what you can do with it:

#include <QtCore>
#include <QtGui>
#include <QtDebug>

#include "lambdaconnection.h"

void printValue(int i) {
    qDebug() << "value:" << i;
}

int main(int argc, char **argv) {
    QApplication app(argc, argv);

    QSlider slider(Qt::Horizontal);
    slider.show();

    LambdaConnection::connect(&slider, SIGNAL(valueChanged(int)), &printValue);

    LambdaConnection::connect(&slider, SIGNAL(valueChanged(int)), [&app] (int i)
    {
        if (i >= 99) {
            qDebug() << "quit.";
            app.quit();
        }
    }); 

    app.exec();
}

Quite nice, I think :) You can get the source for lambdaconnection.h here.

Its main drawback is the use of pointers to member functions as ordinary function pointers. It works with GCC and maybe Clang, but it’s probably going to break with other compilers and/or operating systems. UPDATE: By converting everything to std::function, we can have one template function which is called by libffi and handles any further lambda calls. No need for member function pointers anymore, and portability shouldn’t be an issue any longer either.

Additionally, to avoid code duplication, it simply converts function pointers to std::function objects. This adds some overhead when calling the function pointer.

Anyway, it’s only an afternoon’s work and it was fun to code. Maybe someone will find this interesting :)

Java bindings?

Since the C# bindings are apparently not really used or wanted by anyone (except for some Windows people – but that’s really not my main development platform), I thought that maybe more people would be interested in Java bindings.

There was a poll on kdedevelopers.org two years ago that showed Java ahead of Ruby and C# – but it doesn’t seem to be really representative: there are at least some Ruby plasmoids on kde-look.org, but none in C# (which was still ahead of Ruby). Given that Trolltech/Nokia abandoned QtJambi I’m not too sure about Java bindings either.

So I’m asking the community directly before I start putting too much work into a project that no one really wants: so far we have (active) bindings projects for Python, Ruby and Perl – is there any demand for Qt/KDE Java bindings in the community? Or if not for Java, for any other language/platform?

Looking forward to your comments 🙂

And the bindings keep rocking: Writing Ruby KIO slaves

It seems like I got into a hacking frenzy after yesterday’s Ruby plugins for Kate. Today I sat down and took another look at the KRubyPluginFactory to implement KIO slave support.

So here it is, our very first Ruby Hello World KIO slave:

# kate: space-indent on; indent-width 2;

require 'korundum4'
require 'kio'

module Rubytest

  class Main < KIO::SlaveBase

    def initialize protocol, pool_sock, app_sock
      super protocol, pool_sock, app_sock
    end

    def get url
      mimeType 'text/plain'

      data Qt::ByteArray.new('Hello World from our first Ruby KIO slave!')

      finished
    end

  end

end

We’ll also need a rubytest.protocol file that describes our new protocol:

[Protocol]
exec=krubypluginfactory
input=none
output=filesystem
protocol=rubytest
reading=true

To test it, you just have to type rubytest:/ into konqi’s address bar and you should see a nice hello world greeting.

Some more info on the structure of a Ruby KIO slave: the Ruby script always has to be named ‘main.rb’ and the SlaveBase-derived class always ‘Main’. The module name is the camelized form of the protocol (so a protocol ‘foo_bar’ would map to a module name ‘FooBar’). The script has to reside in <kde prefix>/share/apps/kio_<protocol name>.
The .protocol file itself should be installed to <kde prefix>/share/kde4/services.

As with the Kate plugin, I packaged this example in a tarball. It ships a Makefile and can easily be installed to your home directory with ‘make install’.
I committed this feature to both trunk and the 4.5 branch – so you can soon start coding your own KIO slaves in Ruby! 🙂

Why KDE bindings simply rock (or: Creating our first Ruby plugin for Kate)

Akademy really speeds up development and communication between developers:
Yesterday I was poked by Milian Wolff. He told me how many people want to write Kate plugins in languages other than C++ and ECMAScript, and how he always told them that this isn’t possible and would probably be a huge amount of work to get going. Well, that wasn’t entirely true:
If the target application has a SMOKE lib that wraps its API and a corresponding bindings extension, it works pretty much out of the box, thanks to the gorgeous KPluginFactory API and of course thanks to our gorgeous bindings.
Kate was a bit trickier to get working, though, because it used old and deprecated API to load plugins. Well, I filed a merge request on gitorious about that, fixed a bug in KRubyPluginFactory, and now I’m very proud to present

The absolutely gorgeous, simple, straight-forward, packed with all the Ruby goodness, very first Ruby Hello-World Kate plugin:

require 'kate'

# The module name has to be the capitalized name of the containing directory.
# In this case, the directory name is kate_ruby_test, so the module name _has_
# to be KateRubyTest.
module KateRubyTest

  # The main class name is the capitalized name of the main script. In this case it's 'test.rb',
  # so the main class name is 'Test'.
  class Test < Kate::Plugin

    # initializer
    def initialize parent, args = nil
      # call Kate::Plugin's constructor
      super parent, 'kate-ruby-hello-world'
    end

    def createView mainWindow
      # @componentData is automatically set to the plugin's component data after the constructor has run
      return KateRubyHelloWorldView.new mainWindow, @componentData
    end

  end

  class KateRubyHelloWorldView < Kate::PluginView
    slots 'slotInsertHello()'

    def initialize mainWindow, componentData
      super mainWindow

      # create the action defined in the ui.rc file
      @guiClient = Kate::XMLGUIClient.new componentData
      action = @guiClient.actionCollection.addAction('edit_insert_helloworld')
      action.text = KDE::i18n('Insert Hello World')
      connect action, SIGNAL('triggered(bool)'), self, SLOT('slotInsertHello()')

      # and add our client to the gui factory
      mainWindow.guiFactory.addClient @guiClient
    end

    def slotInsertHello
      return if self.mainWindow.nil?

      kv = self.mainWindow.activeView

      kv.insertText 'Hello World!' if !kv.nil?
    end

    def readSessionConfig config, groupPrefix
      # do something useful here
    end

    def writeSessionConfig config, groupPrefix
      # do something useful here
    end

  end

end

You’ll also need a ui.rc file that defines the actions to be merged into the Kate main window and a .desktop file which describes your plugin. Actually, I’m too lazy to post them all here, so I made a nice package containing everything you need, together with a Makefile which can ‘make install’ the plugin into your home directory.

Since all SMOKE-based bindings wrap the C++ API nearly 1:1 and only add language-specific syntactic sugar on top, you can create nearly any plugin you like in any language, without modifying the host application, as long as the API is wrapped in a SMOKE lib and a bindings extension. In C# you can even create KIO slaves (a monodoc KIO slave example is shipped with the kdebindings tarball). That feature hasn’t been added to Ruby yet, but it is on my ToDo list.

So praise the bindings, praise the KDE plugin infrastructure and start working on Ruby Kate plugins! 🙂

You can find the Ruby Kate binding and the SMOKE kate lib in current kdebindings trunk.

qyotoassemblygen progress

Some time back I started a tool called qyotoassemblygen [0], which will hopefully ease the generation of .NET/Mono bindings based on SMOKE libraries. It basically introspects a SMOKE library and generates a System.CodeDom tree from that information. The CodeDom can then be used to generate C# code and/or compile an assembly.
Although some parts were rather difficult and had to be rewritten again and again (like checking whether a method has to be marked “override” or “virtual”), I can now announce that it’s quite stable and generates all of the Qt assemblies just fine :). I’m currently working on getting the KDE assemblies to build, which is really just a matter of sorting out *Private classes.

By using plugins the tool is not bound to Qt-based bindings, so we could as well generate assemblies for other toolkits, like Wt (for which there is already a smoke lib, thanks to Richard :)). Plugins will also make it easy to add syntactic sugar like event support for Qyoto (which I hope I can implement for KDE SC 4.5).

If everything works as expected, we can hopefully drop kalyptus completely in the next release and reduce our maintenance cost considerably 🙂

[0] http://websvn.kde.org/trunk/playground/bindings/qyotoassemblygen