Having “fun” with JSR-303 Beans Validation and OSGi + Spring DM

Most Java standards are implemented as an API jar and a concrete implementation jar. The API consists mainly of interfaces, plus a class that looks up the default implementation. This lookup usually works by trying to load a specific resource located in the implementation jar. This can lead to “interesting” classloader problems when used with OSGi, and with Spring DM specifically.
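A common workaround for such lookup problems is to swap the thread context classloader (TCCL) around the bootstrap call, since the default provider resolution consults the TCCL. A minimal sketch, with a helper of my own invention (not framework API):

```java
import java.util.concurrent.Callable;

// Sketch of the usual workaround: under OSGi the thread context classloader
// often cannot see the implementation bundle, so we temporarily point it at a
// classloader that can. TcclHelper is an illustrative name, not a real API.
public class TcclHelper {

    public static <T> T withContextClassLoader(ClassLoader loader, Callable<T> body)
            throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(loader);
        try {
            return body.call();
        } finally {
            // always restore the original context classloader
            current.setContextClassLoader(previous);
        }
    }

    public static void main(String[] args) throws Exception {
        // In an OSGi bundle you would pass the implementation bundle's
        // classloader and call Validation.buildDefaultValidatorFactory()
        // inside the body.
        System.out.println(withContextClassLoader(
                TcclHelper.class.getClassLoader(), () -> "lookup succeeded"));
    }
}
```

Restoring the previous classloader in a finally block is important, because the calling thread may be pooled and reused for unrelated work.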

Serializing Xtext Models

Eclipse Galileo has been out for a while and it ships with TMF Xtext 0.7 (formerly openArchitectureWare Xtext). Among other things it adds scoping of references, which I really like, and especially the ability to serialize your model back to its textual representation without losing comments.

This is especially convenient if you are currently using some other means of generating code, for example Hibernate Tools. The serialization feature gives you an easy way to transform your existing model into an Xtext model from Java code. Even better, you can create updatable models, where you can manually edit the Xtext representation and keep the model up to date from an external source.

Example for loading and saving an XText model:

public static void main(String[] args) throws IOException {
    ResourceSet resourceSet = new ResourceSetImpl();
    URI uri = URI.createFileURI("myapplication.domainmodel");
    Resource resource = resourceSet.getResource(uri, true);
    Model model = (Model) resource.getContents().get(0);

    // create a new entity and add it to the model
    // (the accessor names depend on your grammar)
    Entity entity = DomainModelFactory.eINSTANCE.createEntity();
    entity.setName("Customer");
    model.getElements().add(entity);

    // save with formatting enabled so the output is pretty-printed
    Map saveOptions = new HashMap();
    saveOptions.put(XtextResource.OPTION_FORMAT, Boolean.TRUE);
    resource.save(new FileOutputStream("myapplication.domainmodel"), saveOptions);
}

The classes Model, Entity and DomainModelFactory are generated from your Xtext grammar.

Detached Service Methods with Spring AOP

In a typical Spring-based server application there is a service layer responsible for infrastructure tasks and communication with the clients. Service methods can be long-running, and their execution would normally block the user interface. Putting the client-side end of the call in a thread solves the blocking but leaves the network connection open while the server side of the call is active. This can lead to all sorts of time-out issues and seems to be the wrong end at which to fix the problem.

Using Spring AOP, it is remarkably easy to declaratively enable service methods to execute detached from the regular call context.

The Goal

public Object myServiceMethod() throws DeferredExecutionException {
    // ...
}

This method would execute with the following semantics:

  1. The regular method call is wrapped in a thread
  2. The thread is executed
  3. If the thread does not finish within x seconds, raise a checked exception
  4. If it does finish in time, return the result of the method

This way, clients of this method are forced to handle detached execution (i.e. show a progress dialog while querying the server) and the client side of the call never lasts longer than a set time.
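The four steps above can be sketched in plain Java. This is an illustrative sketch, not Spring API: the class and method names below are made up, and in the real solution this logic would sit inside a Spring AOP MethodInterceptor wrapped around the service method.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class DetachedExecution {

    // The checked exception that forces callers to handle detached execution
    public static class DeferredExecutionException extends Exception {}

    public static <T> T invokeDetached(Callable<T> call, long timeoutMillis)
            throws DeferredExecutionException, InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // 1. + 2. wrap the method call in a thread and execute it
            Future<T> future = pool.submit(call);
            try {
                // 4. if it finishes in time, return the result of the method
                return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                // 3. still running after the timeout: the task keeps running
                // detached on the server; signal that with a checked exception
                throw new DeferredExecutionException();
            } catch (ExecutionException e) {
                throw new RuntimeException(e.getCause());
            }
        } finally {
            // shutdown() lets a detached task run to completion,
            // then the worker thread is reclaimed
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // completes within the timeout, so the result is returned directly
        System.out.println(invokeDetached(() -> "result", 1000));
    }
}
```

Note that the timed-out task is not cancelled; it continues on the server side, which is exactly the detached behaviour the client then has to deal with.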

Is Model Driven Development Faster?

Recently a colleague asked me if model driven software development (MDSD, if you like acronyms) is really faster than traditional development. The question is hard to answer because it really depends on the specific set of tools you use, the abilities of your team and so on. Although development time is one of the key arguments for model driven development, the question might be wrongly put. Other factors like software quality and vertical consistency (across all layers of the application) outweigh the time-to-market argument in my opinion.

Model driven development is often still seen as gray theory by a lot of people, but once you have experienced it you never want to go back. A lot of the modern web frameworks like Ruby on Rails, Django, CakePHP, Grails etc. utilize to some extent code-generation methods borrowed from model driven development, and they are making these techniques more popular and familiar to developers. Still, those frameworks focus on providing aid for mundane tasks like generating O/R mappers and scaffolding user interfaces, but are not as extensive as a well-tailored model driven development process, which goes far beyond just generating certain artefacts.

When doing MDSD you basically have two choices:
A) Buy-In: Buy one of the so-called 4GL development environments, or a toolset from one of the several CASE-tool vendors (possibly OMG MDA compliant)
B) Roll-your-own: Define your own DSL (domain specific language) for the problem domain and either completely roll your own toolchain (model editor, code generators, validators, …) or use one of the mature open source frameworks like openArchitectureWare or AndroMDA and build on those

Approach A: Buy-In


Pros:

  • Development time is indeed lower than traditional development if the metamodel fits your problem domain
  • You can buy training
  • You benefit from the vendor’s experience that went into the toolchain

Cons:

  • Vendor lock-in: The lack of standards for templating languages ties you to the vendor. Changing gets expensive, and if the vendor goes out of business you are in trouble. Even basic things like the XMI format for UML diagrams are not 100% interoperable
  • Steep learning curve: The developers are required to learn yet another set of technologies

Approach B: Roll-your-own


Pros:

  • Very flexible: You can design the development process exactly as you see fit
  • The quality of the open source frameworks is high, and there is no vendor lock-in
  • You can evolve your meta-model or your DSL perpetually during the project. All team members get familiar with the new language and concepts

Cons:

  • Slower: you have to develop the developer tool chain as well

General Properties


Pros:

  • Overall vertical consistency if done right: Constraints are enforced in every tier, fields are named consistently, the user interface doesn’t have local quirks. Developers can deviate from the coding standards only to a certain degree.
  • It lessens the burden of mundane tasks like writing yet another XML file to configure an object-relational mapping or a deployment descriptor.
  • A bug that is introduced by a code generation step can easily be removed from all parts of the application.
  • Adding aspects to parts of the application is very easy and just requires the code generation or your (meta-)model to be changed.
  • Higher level of abstraction: Developing on this conceptual level is a lot easier as soon as the team gets used to it.

Cons:

  • MDSD is only applicable if the project is large enough.
  • The project must have enough internal recurring structures.
  • Higher level of abstraction: Developing on this conceptual level is very different from traditional development. Not every developer feels comfortable with it, and during the first MDSD project it requires re-thinking of habits.
  • Debugging modelling errors is a pain: You have to debug the problem in the generated code, follow it all the way up to the actual cause (in the model or the code generator), fix it there, re-generate the code, test, debug and possibly start over again.

Textual DSLs vs. UML / Graphical DSLs

Developers are used to working with text files and prefer textual DSLs. Textual DSLs can nowadays easily be created using, for example, the Xtext project from the openArchitectureWare framework. Business users are used to graphical languages and prefer those. To satisfy both ends it might be worthwhile to have a graphical as well as a textual representation of the model that is fully synchronized.

Open Problems

  • (Semantic) versioning can be tough. Manually merging XMI files is no fun, textual DSLs in most cases lack semantic versioning information. I do not refer to storing a textual DSL in a version control system but rather storing model changes as part of the model that can be utilized for let’s say generating database migration scripts or versioning external (web-)services.
  • Collaborative editing is only available in enterprise versions of UML tools
  • Proprietary template languages for code generation
  • Debugging generic problems is slower than just fixing a bug in-situ. I haven’t yet seen any good tools that support debugging of the actual model.


Conclusion

Model driven development is only faster if applied to a family of software applications. In most cases the first application developed with the new methodology will have a longer time-to-market, but developing is a lot more rewarding in terms of quality, consistency and general developer happiness. If done right, additional advantages like semantic versioning arise.

KeyPosé – Flavour your Screencasts with Shortcuts

How often have you found yourself recording a screencast annoyed by the fact that only the mouse is visible? I haven’t seen any screen recording software that provided an easy way to display shortcuts on screen as you type.

Most of the fancy UI stuff happens in OS X land (see Mouseposé) but I needed the functionality under Windows. Microsoft introduced the layered window API in Windows 2000, which makes it possible to create alpha-blended windows with any shape.

So on a chilly afternoon I sat down and wrote a little utility that implements a global key logger which displays all shortcuts and keystrokes as you type – on screen and with a semi-transparent overlay. This makes it very easy to show any shortcuts you use when you record a software demo, for example.

To get a feeling what it looks like: Screencast

Tip: when doing the recording with TechSmith’s Camtasia make sure to enable the “record layered windows” checkbox.

The software is provided AS IS free for private and academic use without any warranty or implied applicability – read the included license in the about box. For commercial use, drop me a line.

System Requirements: Windows 2000 (with GDI+) / Windows XP / Windows Vista
Download:  KeyPose Download

Full End-User Reporting with ReportBuilder and DataAbstract

Since a lot of people have been asking about a successful integration of Digital Metaphors’ wonderful ReportBuilder with RemObjects’ DataAbstract, I thought it might be a good idea to post what I have been doing to integrate the two in an elegant fashion.


Goals:

  • Give the end-user access to the full spectrum of possibilities, including design of queries, Report-Designer preview, the Report Explorer and so on.
  • When running reports within your application server, make use of direct database access for increased speed.


The approach:

  • Write a DADE plug-in (data driver) that abstracts where the report is actually executed.
  • On the client side: Use a locally available Channel and Message to access a ReportService on the server side which allows arbitrary SQL to be executed. This has already been solved by Wouter Devos of XLent Solutions and I will extend his solution.
  • On the server side: Access the active service the report is executed within and execute the SQL generated by ReportBuilder’s DADE directly, without the overhead of a network round-trip.


Security

The whole point of a multi-tier setup is to hide the database from the client; the client should not send, or even know about, SQL. By allowing the client to execute arbitrary SQL you grant unrestricted access to all information. To minimise the potential security risk, the associated transaction should be read-only to prevent any updates by the client. Additionally, you should require a security token that is only granted to selected users (“End-User Reporting Allowed”).

Because all data that is retrieved is re-packaged, transmitted over the wire and ultimately ends up in a memory dataset, the resulting speed is definitely slower than direct database access. You should therefore limit the number of records returned, so the client does not “hang” while waiting for the whole database to be pulled into local memory.

Technical Details


Client side

On the client, implement one interface in a singleton (your Hydra host would be a good candidate) and assign the global variable in unit daDataAbstract.pas.

  { Implement this interface on the client. Assumptions:
    - Single thread on the client (UI)
    - One connection to one server
    - The Remoteservice.Message.ClientId is implicitly used for session management

    Set the global variable gReportingClientController on the client. }
  IReportingClientController = interface
    function GetRemoteService: TRORemoteService;
    function GetDataStreamer: TDADataStreamer;
  end;

This provides the minimal means to connect to your server.


Server side

On the server side you need to implement a service that hosts a TppReport component and provides the necessary information to the client. This service must implement the following interface:

  { Implement this interface in the service that contains the report component.
    When retrieving data on the server, Report.Owner is queried for this
    interface. The fall-back is local service discovery (FindClassFactory) and
    instantiation of the reporting service (slower than the former). }
  IReportingServiceRBuilderSupport = interface
    function GetConnection: IDAConnection;
    function GetDataStreamer: TDADataStreamer;
    procedure GetTableNames(out TableNames: string);
  end;

This interface is used when executing reports “locally” within the server context. The class factory for the reporting service should be a pooled one, since we want to utilize the positive effects of full multi-threading.

The service needs to implement a few additional methods so the client can gather the necessary information. It could look like this:

  { TReportingService }
  TReportingService = class(TRORemoteDataModule, IReportingService,
    IReportingServiceRBuilderSupport)
    BinDataStreamer: TDABinDataStreamer;
    bpReportFolders: TDABusinessProcessor;
    bpReportItems: TDABusinessProcessor;
    ppReport: TppReport;
    ppReportItemWithData: TppDBPipeline;
    tblReportItemWithData: TDACDSDataTable;
    dsReportItemWithData: TDADataSource;
    Schema: TDASchema;
    procedure RORemoteDataModuleActivate(const aClientID: TGUID;
      aSession: TROSession; const aMessage: IROMessage);
    procedure RORemoteDataModuleDeactivate(const aClientID: TGUID;
      aSession: TROSession);
  private
    FConnection: IDAConnection;
  public
    constructor Create(AOwner: TComponent); override;

    { IReportingService methods }
    // Needed for the report explorer only
    function GetAllReportItems: Binary;
    function GetReportTemplateData(const ReportItemId: Integer): Binary;
    function UpdateReports(const Data: TROBinaryMemoryStream): TROBinaryMemoryStream;

    // Called by client-side reporting
    function GetReportDatasetData(const SQL: string; const TableName: string;
      const LoadAll: Boolean): TROBinaryMemoryStream;
    function GetReportDatasetSchema(const SQL: string;
      const TableName: string): TROBinaryMemoryStream;

    // Only needed if the client should use human-readable translated reporting meta-data
    function GetDictionaryData: TROBinaryMemoryStream;

    // Custom method to execute a certain report on the server side
    function GenerateReportPreview(const ReportItemId: Integer;
      const Params: TReportParamList): TROBinaryMemoryStream;

    { IReportingServiceRBuilderSupport }
    function GetConnection: IDAConnection;
    function GetDataStreamer: TDADataStreamer;
    procedure GetTableNames(out TableNames: string);
  end;
The most important part of the implementation is:

function TReportingService.GetReportDatasetSchema(const SQL,
  TableName: string): TROBinaryMemoryStream;
var
  ds: IDADataset;
begin
  Result := Binary.Create();
  BinDataStreamer.Initialize(Result, aiWrite);
  // an empty SQL string means: only the schema of the given table is wanted
  if SQL = '' then
    ds := FConnection.NewDataset(Format('SELECT * FROM %s WHERE 1 = 0',
                                 [TableName]), 'dsReportData')
  else
    ds := FConnection.NewDataset(SQL, 'dsReportData');

  ds.Prepared := True;
  BinDataStreamer.WriteDataset(ds, [woSchema]);
end;

function TReportingService.GetReportDatasetData(const SQL, TableName: string;
  const LoadAll: Boolean): TROBinaryMemoryStream;
var
  ds: IDADataset;
begin
  Result := Binary.Create();
  BinDataStreamer.Initialize(Result, aiWrite);
  if SQL = '' then
    ds := FConnection.NewDataset(Format('SELECT * FROM %s WHERE 1 = 0',
                                        [TableName]), 'dsReportData')
  else
    ds := FConnection.NewDataset(SQL, 'dsReportData');

  // write all rows, or none if the client only wants the schema record
  if LoadAll then
    BinDataStreamer.WriteDataset(ds, [woSchema, woRows], -1)
  else
    BinDataStreamer.WriteDataset(ds, [woSchema, woRows], 0);
end;

procedure TReportingService.GetTableNames(out TableNames: string);
var
  Tables: IROStrings;
begin
  { filling Tables with the table names from the schema is omitted here }
  TableNames := Tables.CommaText;
end;


This article is just a brief overview of what is actually needed to get a fully integrated reporting solution and there are a lot of nuts and bolts involved in the full solution that I will not show here. I hope this solution will help a lot of people who just want to get something working.

Download link: DADE-Plugin for DataAbstract