Category Archives: Development

Look out for the JDK Performance trap

Would you believe that running the same code on the same system can be up to 100% faster depending on the JDK you use? (And mind you, I am talking about standard Oracle JDKs here, nothing experimental or expensive!)

A couple of weeks ago I decided to run my standard performance tests for the Pentaho Reporting engine. I had just upgraded from a six-year-old computer to a shiny new, big, fat machine and was itching to give it a run. All the software freshly installed, the filesystem sparkly clean: let's see how the various reporting engine versions run.

And lo and behold, Pentaho Reporting 4.8 seemed to outperform Pentaho Reporting 5.0 by a staggering 100%. Digging into it with JProfiler, I could not find a clear culprit: the ratio of time spent in each major subsystem was the same for both versions; 5.0 was simply twice as slow as 4.8 at every task.

After nearly two weeks of digging, I finally found the cause: for some arcane reason I was using a 64-bit JDK for the 4.8 test, but a 32-bit JDK for the 5.0 test run. Using the exact same JDK for both runs fixed it.

The numbers: JDK 7 – 32-bit vs. 64-bit, client vs. server VM

I have a standard test: a set of reports that each print 100,000 elements, in configurations of 5,000 rows with 20 elements per row, 10,000 rows with 10 elements, 20,000 rows with 5 elements, and 50,000 rows with 2 elements.

Here are the results of running Pentaho Reporting 5.0 in different JDK configurations. All times are in seconds.

JVM configuration   5k_20   10k_10   20k_5   50k_2
32-bit / -client    3.75    4.31     5.52    9.203
32-bit / -server    2.2     2.5      3.2     5.3
64-bit / -client    1.92    2.2      2.8     4.75
64-bit / -server    1.9     2.18     2.78    4.75

Running Pentaho Reporting 4.8 in the same configurations yields no different results (apart from statistical noise). So with all the major work that went into Pentaho Reporting 5, at least we did not kill our performance.

So the lesson learned is: choose your JVM wisely, and if you happen to use a 32-bit system or JDK, make sure you add the ‘-server’ command line switch for all your Java tools, or you will wait twice as long for your data to appear.
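
If you are unsure which VM your tools actually run on, you can check from within Java itself. Here is a quick sketch; it relies only on standard system properties, though note that ‘sun.arch.data.model’ is specific to Sun/Oracle VMs and may be absent on other vendors’ VMs:

public class JvmCheck {
  public static void main(final String[] args) {
    // Prints e.g. "Java HotSpot(TM) 64-Bit Server VM" or "... Client VM"
    System.out.println("VM name:    " + System.getProperty("java.vm.name"));
    // "32" or "64" on Sun/Oracle VMs; may be null elsewhere
    System.out.println("Data model: " + System.getProperty("sun.arch.data.model"));
    System.out.println("OS arch:    " + System.getProperty("os.arch"));
  }
}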

 

CI builds for Pentaho Reporting 5.0

As the Pentaho CI server had technical difficulties offering report-designer snapshots over the last few weeks (which are finally resolved now), I decided to dust off my Jenkins and Artifactory servers to get community builds out there.

And even though the Pentaho CI server once again offers a Pentaho Report Designer snapshot build, I think it is a good idea to have a second, failover CI server.

So from now on, you can always get the latest build from the CI-Builds page on this blog. I will add a feed for the 3.9-x branch and an additional (unofficial and unsupported) 5.0 branch in a few days.

So go grab your CI build!

Pentaho Reporting extension points

Pentaho Reporting provides several extension points for developers to add new capabilities to the reporting engine. When you look at the code of both the reporting engine and the report-designer, you can easily spot the many existing modules built on top of them.

Each extension point comes with a meta-data structure and is initialized during the boot-up process. The engine provides the following extension points:

  • Formula Functions
  • Named Function and Expressions
  • Data-Sources
  • Report Pre-Processors
  • Elements
    • Attributes
    • Styles

Formula functions are part of LibFormula, Pentaho’s adaptation of the OpenFormula standard, a vendor-independent specification for spreadsheet calculations. Formula functions provide a very easy way to extend the formula language with new elements without having to worry about the details of the evaluation process. This is perfect if you want to encapsulate a calculation and still be flexible enough to use it in a general-purpose formula.
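
To give you an idea of the shape such an extension takes, here is a minimal sketch of a custom formula function. It assumes LibFormula’s Function interface with its getCanonicalName() and evaluate() methods; treat the exact package locations and signatures as something to verify against your LibFormula version, and remember that the function still needs to be registered via its metadata during boot:

import java.math.BigDecimal;
import org.pentaho.reporting.libraries.formula.EvaluationException;
import org.pentaho.reporting.libraries.formula.FormulaContext;
import org.pentaho.reporting.libraries.formula.function.Function;
import org.pentaho.reporting.libraries.formula.function.ParameterCallback;
import org.pentaho.reporting.libraries.formula.lvalues.TypeValuePair;
import org.pentaho.reporting.libraries.formula.typing.coretypes.NumberType;

// Hypothetical DOUBLEIT(x) function: returns its numeric argument times two.
public class DoubleItFunction implements Function {
  public String getCanonicalName() {
    return "DOUBLEIT"; // the name used in formulas, e.g. =DOUBLEIT([SALES])
  }

  public TypeValuePair evaluate(final FormulaContext context,
                                final ParameterCallback parameters)
      throws EvaluationException {
    // A real implementation should coerce types via context.getTypeRegistry();
    // a plain cast keeps this sketch short.
    final Number value = (Number) parameters.getValue(0);
    return new TypeValuePair(NumberType.GENERIC_NUMBER,
        new BigDecimal(value.doubleValue() * 2));
  }
}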

Named functions and expressions are the bread-and-butter system for calculating values in a report. Expressions can be chained together by referencing the name of another expression or a database field. Named functions are the only way to calculate values across multiple rows. Adding functions is relatively easy, as a named function only needs the implementation itself plus the necessary metadata.
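
As a rough sketch of what such an implementation looks like, assuming the classic engine’s AbstractExpression base class and the DataRow lookup you see in existing expressions (field names here are made up for the example):

import org.pentaho.reporting.engine.classic.core.function.AbstractExpression;

// Hypothetical expression that concatenates two fields of the current row.
public class FullNameExpression extends AbstractExpression {
  public Object getValue() {
    // getDataRow() exposes the current row, including other expressions by name.
    final Object first = getDataRow().get("firstname");
    final Object last = getDataRow().get("lastname");
    return String.valueOf(first) + " " + String.valueOf(last);
  }
}

Register the expression’s metadata during boot, and it becomes available in the Report Designer like any built-in function.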

Data-Sources are responsible for querying external systems and providing the report with tabular mass data. Pentaho Reporting already ships with data-sources for relational data and OLAP, a PDI data-source that executes ETL transformations to compute the data for the report, and various scripting options. Adding a data-source is more complex, as an implementor needs to write the data-source itself, its meta-data, and the XML parsing and writing capabilities. In addition, the author needs to provide a UI to configure the new data-source.
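
The heart of any data-source is a query method that maps a query name plus the current parameter values to a TableModel. The stripped-down sketch below shows just that core idea; the real DataFactory contract adds lifecycle and metadata methods (initialize, derive, close and friends) on top of it, so take this as an illustration rather than the full interface:

import javax.swing.table.DefaultTableModel;
import javax.swing.table.TableModel;
import org.pentaho.reporting.engine.classic.core.DataRow;

// The essence of a data-source: named query in, tabular mass data out.
public class InMemoryQuery {
  public TableModel queryData(final String query, final DataRow parameters) {
    // A real implementation would dispatch on the query name and read
    // parameter values from the DataRow.
    final DefaultTableModel model =
        new DefaultTableModel(new Object[]{"region", "sales"}, 0);
    model.addRow(new Object[]{"EMEA", Integer.valueOf(1234)});
    model.addRow(new Object[]{"APAC", Integer.valueOf(5678)});
    return model;
  }
}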

With Pentaho Reporting 4.0 we added two data-source options that make it easier to create new data-sources.

The first option uses our ETL tool as a backend to parametrize template transformations. A data-source developer therefore only has to provide the transformation template, and the system automatically provides the persistence as well as all dialogs needed to configure the data-source.

The second option uses a small parametrized Java class, similar to formula expressions. These calculations, called sequences, are managed by the Sequence-Data-Source, which takes care of all persistence and UI needs.

Report-Pre-Processors are specialized handlers that are called just before the report is executed for the first time. They allow you to alter the structure of the report based on parameter values or query results. These implementations are ‘heavy stuff’ for the advanced user or system integrator.

Last but not least, you can create new element types. Elements hold all the data and style information needed to produce a renderable data object. The reporting engine expects elements to return either text (with additional hooks to return raw objects for export types that can handle them), graphics or other elements. An element that produces other elements for printing acts as a macro-processor and can return any valid content object, including bands and subreports.

Element metadata is split into three parts: the element itself is a union of the element’s type, attributes and style information. Implementing a new basic element requires you to write a new ElementType implementation (the code that computes the printable content) and to declare all styles and attributes the element uses.

The available style-options are largely defined by the capabilities of the underlying layout engine and thus relatively static in their composition.

An element’s attributes are a more free-form collection of data. Elements can contain any object as an attribute. The built-in XML parser and writer handle all common primitive types (strings, numbers, dates, and arrays thereof). If you want to use more complex data structures, you may have to write the necessary XML parser and writer handlers yourself.
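
To illustrate: attributes are plain namespace/name/value triples on an element. A small sketch, assuming the classic engine’s Element.setAttribute/getAttribute API; the namespace URI and attribute name here are made up for the example:

import org.pentaho.reporting.engine.classic.core.Element;

public class AttributeDemo {
  // Hypothetical namespace for our own attributes; any unique URI works.
  private static final String MY_NS = "http://example.org/my-extension";

  public static void main(final String[] args) {
    final Element element = new Element();
    // Any Object can be stored, but only common primitives survive an
    // XML round-trip without custom parser/writer handlers.
    element.setAttribute(MY_NS, "threshold", Integer.valueOf(42));
    System.out.println(element.getAttribute(MY_NS, "threshold"));
  }
}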

A better launcher for the Pentaho Windows clients

As my MacBook Pro gave up after just four years of service, and given that Apple has removed all serious offers and retained only toys, I am back to using Windows.

But boy, our Windows integration is painful.

On the Mac, I unzip the app bundle, drag it into “/Applications”, and the OS deals with associating it with the .prpt files on my disk. Launching the report-designer there feels fairly native (for the amount of work we actually did). Oh, and the JDK is already installed and just works. (Well, Apple’s 1.6, not that crappy Oracle thingy, which is buggy as hell.)

On Windows, I get a different story. I have the JDK installed, but JAVA_HOME is NOT set afterwards, as the installer does not do that. I unzip the report-designer and end up with a directory on disk that contains a report-designer.bat file. Starting that file flashes a console window into my face. Highly technical, and totally 1980s.

Clicking on a PRPT file in the Explorer does not start the Report-Designer. Instead I get a dialog telling me that no application can open the file. Well, I wrote the beast, so I am fairly sure the report-designer can handle PRPT files. And fixing the association manually is ugly; no normal user would go through that.

It’s time for a change!

Using the rusty C# skills I normally reserve for game development, I created a small launcher that improves the integration of the Pentaho fat clients with Windows. The launcher is simply a smarter way of invoking the batch files we use for our products.

The launcher hides the ugly command window and deals with setting up and maintaining the file associations for the .prpt and .prpti files. And best of all: it works for all existing Pentaho Report-Designer installations. Just drop it into any of your PRD-3.7, 3.8 or 3.9 installations and enjoy a modern experience.

This small change goes a long way towards hiding the fact that our tools are not native Windows applications, and it makes launching them feel much more natural.

So download the (experimental) launcher, put it into your report-designer installation directory and give it a try. In case you encounter problems, please give me a shout here or in the Pentaho Forum so that we can iron them out quickly.

Download the Pentaho Report Designer Launcher.

Crosstab update – Pagebreaks and header visibility

It has been a while since I wrote something about the eternal project, so here’s a quick update.

I just checked in a few changes to the crosstab backend and the magical create-a-crosstab dialog. In addition to selecting the row and column dimensions (as usual), you now get a bunch of extra options for your crosstab.

The most interesting one is the switch from static widths and heights (80x20pt) to relative sizes. A crosstab now tries to fill the available space as well as possible, expanding and shrinking elements as needed.

Marius requested an option to show title headers for the measures. You can now control whether you want such headers (they are on by default). As a bonus, you also get control over the title headers of your column dimensions, in case you like it minimalistic.

Last but not least: when a crosstab is larger than a single page, we now create proper pagebreaks and reprint the header section on the next page.

That basically concludes the feature hunt for this release. Until we actually wrap up and do a release build, it’s hardening and bug-fixing time. So give it a try, and if you tickle a bug out of it, I would be pleased if you could feed our JIRA beast with it.

PRD-2087: Widows, Orphans and we are all keeping together, aren’t we?

One long-standing, never resourced, never fixed issue we had was managing orphans and widows in reports. Well, with the cold wind of austerity blowing over Europe, we can’t forget the widows and orphans, can we?

What are Widows and Orphans?

In typography, an orphan is a single line of a paragraph left behind at the bottom of the previous page. A widow is a lonely line that did not fit on the previous page and now sits alone at the top of the next page. For plain text these rules are fairly easy to solve, as paragraphs form a flat list and are not nested into each other.

In the field of reporting, we usually care less about lines of text and more about the greater unit of sections. When you create a report, you don’t want a group-header sitting all alone at the bottom of the page without at least one details band to go with it. Likewise, a group-footer should not be the only thing on the last page of its group. The trouble starts when you apply these rules to a deeply hierarchical structure, as we see in reports.

Like so many layouting concepts, orphans and widows are easy to explain but usually a pain to resolve. Orphan and widow rules are cumulative: when you have nested groups, the orphan declarations of the outer group cannot be solved in isolation.

Let’s take the simple example of a two-level report, where each group declares that it wants at least two sections as its orphan area. Assuming the group-headers are filled, this means each group’s header and at least the next section must be kept together. For the outer group, that is the outer group-header and the inner group-header. For the inner group, it is its group-header and the first details section.

The inner group’s header is now covered by two orphan rules: it is part of both the outer group’s unbreakable section and the inner group’s own section. When rules partially overlap, they must be merged.

Last but not least, in the light of these rules we can now redefine ‘keep-together’ (or in PRD speech: avoid-page-break-inside) as an infinitely large number of orphans in the break-restricted area.

How to use this feature

The orphan, widow and keep-together properties can be defined on any section or band. By default, all root-level bands (details, group-header, group-footer, etc.) have a ‘keep-together’ default value of ‘true’.

The orphan and widow style settings take a positive integer as their value. Negative values are ignored.

An orphan or widow constraint controls how pagebreaks within that element are handled, and it only affects the child nodes of the element on which it is defined. So if you want to keep a group-header together with the next few detail sections, you have to define the orphan constraint on the group element; defining it on the group-header will not have the desired effect.
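
In API terms, this boils down to setting style properties on the group element. A sketch, assuming the keys are exposed as ElementStyleKeys.ORPHANS and ElementStyleKeys.WIDOWS (verify the exact key names against your engine version):

import org.pentaho.reporting.engine.classic.core.Group;
import org.pentaho.reporting.engine.classic.core.MasterReport;
import org.pentaho.reporting.engine.classic.core.style.ElementStyleKeys;

public class WidowOrphanSetup {
  public static void configure(final MasterReport report) {
    final Group group = report.getGroup(0);
    // Keep the group-header together with at least two following sections ...
    group.getStyle().setStyleProperty(ElementStyleKeys.ORPHANS, Integer.valueOf(2));
    // ... and never let fewer than two sections end up alone after a break.
    group.getStyle().setStyleProperty(ElementStyleKeys.WIDOWS, Integer.valueOf(2));
  }
}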

The reporting engine treats all root-level bands as elements that count as content in the keep-together calculations. All other elements are ignored for the purposes of the widow-orphan calculations. If you explicitly want an element to be counted in these calculations, set the style-key ‘widow-orphan-opt-out’ to false on that element.

If an element that counts for the widow-orphan calculation contains other widow-orphan enabled elements, the parent element is ignored for the widow-orphan calculations.

Elements with a canvas or row layout form a leaf node for the widow-orphan calculation. Their child elements cannot take part in any of the parent’s widow and orphan calculations. However, they can establish their own widow-orphan context; therefore, all subreports, even inline subreports, can declare widow-orphan rules.

The defaults built into the reporting engine ensure that each section on the report is treated as an element for the widow-orphan calculations, even across subreports.

Performance

Solving widow and orphan rules is a costly exercise. Our reporting engine allows user calculations and user-defined formatting to react to page-break events. This allows you, for instance, to reset the row-banding at the beginning of a page, or to format odd and even pages differently. And finally, it allows you to update the page-header and page-footer on a page break, so that you can show data from the current page in the headers.

If a section is finished (for instance, a group has been fully processed), we can safely evaluate the widows and orphans for that group.

For ongoing content generation: when an orphan value greater than zero is declared on a section, the engine suspends the layout calculation until enough content has been generated to fulfill all orphan rules currently active on the report. Likewise, for widow calculations, report processing is suspended until more elements than the declared widow count have been generated as content, and only the elements not covered by the widow rule are considered for layouting.

Suspending the layout processing can have a severe negative impact on the report processing time. When the engine suspends the layout-calculation, it keeps the unfinished layout in memory until it reaches a point where the layout can be safely calculated again. In the worst case, this suspends the layouting until the report generation finishes.

Keeping the unfinished layout in memory consumes more memory than the normal streaming report processing. When the engine finally detects a page break that fulfills all orphan and widow rules active on the report, it has to roll back to the state that generated the last visible element on the current page, to inform any page-event listener about the page break in the right context. Every rollback is expensive, and the reporting engine has to discard any content that was already generated after that page break, as functions may have reconfigured the report state in preparation for, or in response to, the page break.

Orphan calculations are usually less expensive than widow or keep-together rules.

So if you export large amounts of data, try to avoid widow and orphan rules on your report. Your report will finish up to 100% faster that way.

 

Finally: this major fix is available for both Pentaho Reporting 3.9 and Pentaho Reporting 4.0. The fix did not make it into this month’s roll-up release for the Pentaho Suite 4.8.1, but it will be available to the general public in the next roll-up release in July. In the meantime, the fix is in the source code repositories, ready to be checked out and built locally 😉

 

LibCGG – how to render CCC charts without a server

The CGG plugin does a nice job; trouble is, it is vendor locked-in. Let’s see whether we can change that.

Years ago, the smart guys at Web-Details started to use Protovis to create modern charts for their dashboards, in a project called CCC (Community Chart Components). Inevitably, these charts needed to be printed from time to time, so shortly afterwards they created the CGG plugin for the Pentaho BI-Server to do just that.

I like the BI-Server. I also like printing. But I don’t like needing a running server just to get my charts as images into a report. So a few weeks ago I took the CGG plugin and pruned everything that related to BI-Server specific code. Refactored it. Sliced it a bit. As a result, we now have LibCGG, readily available on GitHub.

What is LibCGG

LibCGG is an abstraction layer for rendering CCC/Protovis charts. Its only focus is rendering: it takes the JavaScript that makes up the charts and produces SVG or PNG output. LibCGG comes with JUnit test cases showing that the simple samples provided by Web-Details actually run. None of these samples have been modified in any way; they just run.

What is it NOT

LibCGG does not deal with data-sources. It provides an interface that can be implemented, but it does not ship with data-sources itself.
LibCGG does not deal with HTTP requests, or with the format in which charts may or may not be stored, defined or delivered to users. It is up to the actual implementation to deal with that. I have modified a version of CGG to use LibCGG as a proof of concept; after all, we don’t want to lose functionality, do we?

What do we need to use LibCGG in the reporting engine?

At the moment, I have not written any glue code to connect LibCGG with the reporting engine. Ultimately, this will happen though; why else would I bother to separate CGG from the server? The barriers are surprisingly low: Pentaho Reporting already handles SVG data, so LibCGG needs just a thin wrapper around an existing element for a first show-off.

After that, we will need a chart editor. Pedro assured me that CCC charts come with enough metadata to make it easy to get a basic one up and running quickly. Once we have that, I am sure our UI team will want to step in to make the experience less geeky.

And last but not least, we need to separate CCC from CDA (Community Data Access) a bit. At the moment, there is a silent assumption that CCC charts exclusively communicate with a CDA data-source. It should not be too hard to reroute those calls to go directly to the report’s declared data-sources instead.

And now, the one-million-dollar question: when will it be ready?

With a bit of weekend magic, how about May? April should (hopefully) see us feature-complete on the committed features for Pentaho Reporting 4.0, so there is plenty of time for some ninja coding. I even have a designated place for it: the ‘extensions-charting’ module, which was reserved for Pentaho’s next-generation charting that never really made it. CCC, be welcome, and never mind the ghosts of past visualizations.

Moving to Git and easier builds

During the last year, as part of my work with the Rabbit-Stew-Dio, I fell in love with Git. Well, sort of; that marriage is not without conflict, and from time to time I hate it too. But when the time came to move all our Pentaho Reporting projects to Git, we were all happy to jump on that boat.

As a result, you can now access all code for the 4.0/TRUNK version of Pentaho Reporting via our GitHub Project. This project contains all libraries, all runtime engine modules and everything that forms the report-designer and design-time tools.

Grab it via:

git clone git@github.com:pentaho/pentaho-reporting.git

Code organization

Our code is split into three groups of modules.

  • “/libraries” contains all shared libraries and code that provides infrastructure that is not necessarily reporting related.
  • “/engine” contains the runtime code for Pentaho Reporting. Whether you want to embed our reporting engine into your own Swing application or deploy it as part of a J2EE application, this contains all you will ever need.
  • “/designer” contains our design-time tools, like the report-designer and the report-design-wizard. It also contains all data source UIs that are used in both the Report Designer and Pentaho Report Wizard.

If you use IntelliJ IDEA for your Java work, you will be delighted to find that the sources form a fully configured IntelliJ project. Just open the ‘pentaho-reporting’ directory as a project in IntelliJ and off you go. If you use Eclipse, well, why not give IntelliJ a try?

Branching system

At Pentaho we use Scrum as our development process. We work on a set of features for about three weeks, a period called a Sprint. All work for that Sprint goes into a feature branch (sprint_XXX-4.0.0GA) and gets merged into the master at the end of the sprint.

If you want to keep an eye on our work while we are sprinting, check out the sprint branches. If you prefer something more stable and are happy with updates every three weeks, stick to the master branch.

During a Sprint, our CI system builds and publishes artifacts from the sprint branches. If you don’t want that, it is now easy to get your own build up and running in under 5 minutes (typing time, not waiting time).

Building the project

The project root contains a global multibuild.xml file that can build all modules in one go. If you want it more fine-grained, each top-level group (‘libraries’, ‘engine’, ‘designer’) contains its own ‘build.xml’ file providing the same service for its modules.

To successfully build Pentaho Reporting, you need Apache Ant 1.8.2 or newer. Download it from the Apache Ant website if you haven’t already.

After you have cloned our Git repository, you have all the source files on your computer. But before you can build the project, you have to download the third-party libraries used by the code.

On a command line in the project directory, call

ant -f multibuild.xml resolve

to download all libraries.

If you’re going to use IntelliJ for your work, you are all set now and can start our IntelliJ project.

To build all projects locally, invoke

ant -f multibuild.xml continuous-local-testless

to produce a build without running the tests.

If you feel paranoid and want to run the tests while building, use the ‘continuous-local’ target instead. This takes quite some time; expect to wait an hour while all tests run.

ant -f multibuild.xml continuous-local

After the process is finished, you will find the Report Designer zip and tar.gz packages in the folder “/designer/report-designer/assembly/dist”.

If you get OutOfMemoryErrors pointing to a JUnit task, or OutOfMemory “PermGen space” errors, increase the memory of your Ant process to 1024MB by setting the ANT_OPTS environment variable:

export ANT_OPTS="-Xmx1024m -XX:MaxPermSize=256m"

Building the project on a CI server

Last but not least: do you want to build Pentaho Reporting on your own continuous integration server and publish all created artifacts to your own Maven server? Then make sure you set up Maven to allow publishing files to a repository.

  1. Install Artifactory or any other maven repository server.
  2. Copy one of the ‘ivy-settings.xml’ configurations from any of the modules and edit it to point to your own Maven server. Put this file into a location outside of the project, for instance into “$HOME/prd-ivy-settings.xml”
  3. Download and install maven as usual, then configure it to talk to the Artifactory server.

Edit your $HOME/.m2/settings.xml file and locate the ‘servers’ tag, then configure it with the credentials of a user that can publish to your Artifactory server:
Replace ‘your-server-id’ with a name describing your server; you will need that name later.
Replace ‘publish-username’ and ‘publish-password’ with the username and password of an account of your Artifactory installation that has permission to deploy artifacts.

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"           
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"           
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 
                    http://maven.apache.org/xsd/settings-1.0.0.xsd">
   ...
   <servers>
     <server>
       <id>your-server-id</id>
       <username>publish-username</username>
       <password>publish-password</password>
       <configuration>
         <wagonprovider>httpclient</wagonprovider>
         <httpconfiguration>
           <put>
             <params>
               <param>
                 <name>http.authentication.preemptive</name>
                 <value>%b,true</value>
               </param>
             </params>
           </put>
         </httpconfiguration>
       </configuration>
     </server>
   </servers>
    ..
</settings>

Now set up your CI job. You can either override the Ivy properties in each CI job, or create a global default by creating a ‘$HOME/.pentaho-reporting-build-settings.properties’ file. The settings in this file are included in all Ant builds for Pentaho Reporting projects.

ivy.settingsurl=file:${user.home}/prd-ivy-settings.xml
ivy.repository.id=your-server-id
ivy.repository.publish=http://repo.your-server.com/ext-snapshot-local

After that, test your setup by invoking

ant -f multibuild.xml continuous

It should run without errors now. If you see errors during publishing, check your Maven configuration or your Artifactory installation.

Conclusion

With the new build structure and the move to Git, it has become tremendously easy to download and work with the Pentaho Reporting source code. Even formerly daunting tasks like setting up a CI server have become simple enough to be documented in a single blog post.

Enjoy!

Introducing the Pentaho Reporting compatibility mode

Every time I worked on the heart of Pentaho Reporting, the layout system, I wondered: how the heck am I going to ensure that I do not break our customers’ existing reports – again?

Before we started work on the crosstab mode, we put a tiny layer of safety onto the engine by creating a set of ‘golden sample’ reports. A golden sample is the pre-rendered output of a report. Each time we make a change, our automated tests generate the output again and compare it with the known-good output we have stored.

Over the last four long weeks (where I expected to spend only two) I rewrote large parts of the layout system. Crosstabs are a more dynamic structure than banded list reports. While banded reports only grow downwards to fill an endless stream of pages, crosstabs grow both horizontally and vertically: a crosstab expands to the right for each new value of the column dimensions it finds, and it expands downwards when the engine prints more row-dimension values.

The newly introduced table layout system that powers the crosstabbing requires stricter rules to arrange the layout elements in a sensible fashion. Ordinarily, we want the resulting layout to be minimal (use as little space as possible, within the constraints set by the designer), stable (produce the same layout every time) and performant (don’t make me wait).

The old layout rules, however, had grown historically. They evolved around bugs, misunderstandings and the desperate need not to break reports created ages ago. Breaking reports is fun, if fun includes losing customers or getting angry calls. I value my sleep, so no more breaking reports for me, if I can avoid it.

From now on, Pentaho Reporting contains a brand new compatibility layer. This layer emulates all the old and buggy behavior to produce report output that is as close to the original releases as possible. Our main concern with compatibility is not necessarily to emulate show-stopper bugs, but to avoid those subtle changes where your report elements start shifting around slightly. When that happens, you can end up with more pages than before, overlapping elements (and thus lost data in Excel and HTML exports) or anything in between.

How does it work?

Since Pentaho Reporting 3.9.0-GA, each report contains a version marker in the “meta.xml” file inside the PRPT file. When we parse a report, we also read that version number and store it as the default compatibility setting. The report-designer preserves this setting over subsequent load/save cycles, so editing an old report in PRD-4.0 does not automatically erase or replace the marker.

We consider reports without a marker to be old reports that must have been created with PRD-3.8.3 or an even earlier version. Likewise, the reporting engine treats any of the ancient XML formats and the PRD-3.0 “.report” files as ancient and assigns them the version number “3.8.3”.

When a report is executed, the report processor checks the compatibility marker. If it is a pre-4.0 marker, we enable our first compatibility mode. This mode changes how elements are produced and how styles are interpreted.

The most important change is that a defined ‘min-width’ or ‘min-height’ automatically serves as the definition of ‘max-width’ and ‘max-height’. There are additional rules; for instance, we ignore layout settings on structural sections like groups or group-bodies.

The most important rule, however, is: a legacy report cannot contain tables, and thus cannot contain any crosstabs. Tables require a proper interpretation of the layout rules, and the old rules tend to contradict each other from time to time, which causes great distress to the table calculations. A distressed table-layout calculation may commit suicide or may throw away your data, so we better not allow that to happen.

So before you can start to use newer features in the reporting system, you have to

Migrate your reports

Report migration is the process of rewriting a report’s layout definition to match the new layout rules. During that rewrite, we try to keep the layout as close as possible to the original. While we are at it, we remove some invalid properties (like layout styles on groups) and migrate the sizing to the updated width/height system (no longer using the min/max-height hack).

You can initiate the migration via “Extra->Migration”. The dialog lists what will happen to your report and prompts you to save the report before the migration starts. The migration cannot be undone with “Edit->Undo”, so this saved report is your security blanket for the migration.

If you are sure that your report will be fine without the rewrite, you can manually force a report to a different compatibility level via the “compatibility-level” attribute on the master-report object. Be aware that this voids your warranty: your report may run just fine, or it may blow up completely. All bets are off.
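
For the adventurous, here is what flipping that switch from code could look like, assuming the engine exposes the marker via MasterReport.setCompatibilityLevel() and the ClassicEngineBoot.computeVersionId() helper (check both against your engine version before relying on them):

import org.pentaho.reporting.engine.classic.core.ClassicEngineBoot;
import org.pentaho.reporting.engine.classic.core.MasterReport;

public class CompatibilityOverride {
  public static void forceLegacyMode(final MasterReport report) {
    // Pretend the report was authored with PRD-3.8.3, enabling the
    // pre-4.0 compatibility mode. Remember: this voids your warranty!
    report.setCompatibilityLevel
        (Integer.valueOf(ClassicEngineBoot.computeVersionId(3, 8, 3)));
  }
}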

Once the migration is done, your report should work as before, but within the corset of the new, and stricter, layout rules.

And to be sure, let me repeat it: you only need to migrate reports when you want to use new features in them. Your old and already published reports will continue to work just fine without any manual intervention.

Bonus Content: Min/Max and Preferred width and height

Until recently, the layout system was not able to handle the layout constraints for minimum, maximum and preferred sizes correctly. The safe default option was to rely on the minimum sizes only: the system magically treated all minimum sizes as maximum sizes in most cases, unless the element had the dynamic-height flag set or an explicit maximum width or height.

With PRD-4.0, the layout system uses better rules with fewer contradictions. It is therefore now safe to rely on the preferred size in most cases.

For reports in PRD-4.0 compatibility, the minimum size defines an absolute minimum; the element will not shrink below it. The maximum size, if defined, is an absolute cap; the element will never grow larger than that. The preferred size defines a flexible, recommended size. In most cases this is the size your box will use.

But if your element has content that requires more space, it will get it (up to the limit imposed by the maximum size). Each element computes a ‘minimum chunk size’, think of it as the largest word in a text, and uses the larger of the chunk size and the defined preferred width as its effective size. For example, a label with a preferred width of 100pt whose longest word needs 120pt will end up 120pt wide.

Try our new compatibility mode. Check whether it preserves your reports, and if not, please, please file me a bug report!

Style Sheets in Pentaho Reporting 4.0 – I blame Marius

As a small deviation from the usual crosstab and layouter work, I cleaned out the style system of Pentaho Reporting.

It all started not long ago, on a stormy night. Creepy rays of moonlight, dark clouds looming over the mountains, wolves howling. On this night, Marius was experimenting with getting cascading-stylesheet styling into the reporting engine. (I suspect lightning rods and wild laughter were on the menu as well.)

His approach reminded me of the dreadful task I faced for the next release. Our style system, a ghoulish monster, had overstayed its welcome. Ashamed of its nature, we never exposed it to anyone. So this rotten pile of code was sitting around, festering and stinking. I had to kill it. It was an act of mercy for all of us.

With the style crud gone, the reporting engine now has a simplified style system. Under the monster, every style definition on an element always presented a resolved picture of the global style state: each label’s style, for example, queried all parent elements to compute the effective font information whenever anyone asked for it.

In the new world, elements no longer maintain that state. Styles are resolved as part of the report processing, after the style and attribute expressions have been evaluated. This not only simplifies cloning (a lot!), it also funnels all style computation through a single code path.

And then along came Polly... ah, well, the release of Suite 4.8, and the customary fun week, where the daily drill is suspended in favour of exploring the possibilities of the code.

Mix parts of LibCSS (the zombie that wanted to be a CSS-styled, better reporting system) with that single code path of style resolving, and you get CSS3 selectors on top of the existing reporting style system.

Pentaho Reporting now has a fully fledged style system. Every master-report contains a style-definition, which is basically what you know as a style-sheet from HTML. Each style-definition consists of style-rules, and each rule has CSS3 selectors attached.

Style-definitions can either be part of the report definition itself, or they can be stored as external files so that they can be shared between reports. The external files use the extension “.prptstyle” and are XML files containing standard element style definitions.

The Pentaho Report Designer allows you to edit both internal and external style definitions. Use ‘Window’->’Style Definition Editor’ to edit standalone files, or use ‘Format’->’Edit Style Definition…’ to edit the internal style definition.

Both editors allow you to load and save ‘prptstyle’ files. Loading a style file in the internal editor imports that external style-definition into the local report, and saving exports it.

So don’t wait, download your own multi-functional report-designer now!

If you download within the next 24 hours, you not only get a style-definition editor, you also get this:

The ‘master-report’ object now has a new attribute called ‘style-sheet-reference’. Set it to a file name or URL and Pentaho Reporting will load the external stylesheet when the report runs. Put a ‘prptstyle’ file on a public server and all your reports can share the same style definition.

All elements now also have two new attributes called ‘style-class’ and ‘id’ (not to be confused with ‘xml-id’ for the HTML export), which allow you to match style-rules to elements. Like HTML’s ‘class’ attribute, ‘style-class’ is a whitespace-separated list of style-class names that apply to the element.

And don’t forget: The green-plus means that all of these attributes can be computed via a style-expression.

Go grab your PRD preview now!