Single Record UI pattern

The single record UI pattern has recently been added to the Openbravo ERP r2.50 development branch (pi). This new pattern allows a tab to be defined as update-only, permitting neither new record insertions nor deletions. It is useful for special tabs showing information the user can edit but where records should not be created or deleted directly; usually the number of records is fixed, or records are created by a process rather than in the tab itself. One example is the System Information tab, which shows global information for the system: it makes sense to change some of its values but never to create a new record. A different case where the pattern is now used is the Business Partner > Customer subtab: when the Business Partner is a customer, this tab contains the information related to that customer, and its records should not be created directly because they must be created in the parent tab.

To implement this pattern, a new concept has been added to the tab definition: UI Pattern. Each tab has a single UI pattern, selected from the values in a new drop-down list.

Currently there are three patterns:

  • Standard. The usual representation, which allows creating, modifying and deleting records.
  • Read Only. Allows viewing the information but not modifying it. This pattern is not new in Openbravo ERP, but the way it is defined has changed: previously a boolean field marked the tab as read only; that field is now deprecated and read only is selected as a pattern in the new drop-down list.
  • Single Record. The pattern described in this post.

Update: This feature will be included in Openbravo ERP 2.50 MP3.

New document: database model

Currently we are working on the Developers Guide for Openbravo ERP r2.50. As part of this effort, last week we published the database model documentation. It is automatically generated from the application's online help and consists of descriptions of all tables in the database and their columns. It is structured as Database_Model/Package/Table: each table is included in its package's chapter, and the information for the table's columns appears as sections in the table's page.

The aim of all this is not to produce a document to be read from head to bottom, but an easily maintainable reference that can be linked from any other document in the wiki. Our goal is also to improve the online help and thus enhance this document in parallel.

If you have any comments on this document (what you like or dislike, what you would include or improve, etc.), please let me know and we'll try to incorporate your feedback.

EDITED on 4-7: This document replaces the Openbravo ERP ER published for previous releases.

Using Mercurial’s bisect extension to find bugs

One week ago the Openbravo ERP code was moved from Subversion to Mercurial. I am completely new to Mercurial and to distributed SCMs, since I had always worked with Subversion, but apart from the new concepts it introduces, the transition has been very smooth, at least so far. I've spent some time over the last few days looking at Mercurial's extensions and, for me, one of the nicest is bisect. Bisect can be very useful for finding the changeset in which a bug was introduced.

A real example

Recently I was assigned this bug. I discovered that it was not present in r2.40 but it was in the development branch; furthermore, I found out the bug was caused by a line having been removed from the code. At this point I wanted to know which commit removed that line, to see whether it was a mistake or was done on purpose while fixing another bug. So the tedious work started: given two revisions, one with the bug (the head of the development branch) and one without it (the r2.40 tag), try different revisions in between to find the one that removed the line. This is not only tedious but also very time consuming.

A good solution: bisect

Bisect is a Mercurial extension that makes this kind of work much faster. Its behavior is pretty simple: you tell it a known good and a known bad changeset and it updates your working copy to one in between; you test whether the bug is present there and mark the changeset as good or bad, and the process repeats until it finds the changeset that introduced the bug. In fact, bisect just decides for you which changeset to test next. That may not sound like a big improvement, but if you combine it with a simple automated test (at least when one is possible) the results can be awesome.

Let me explain it through a simple example:

  • First prepare the environment: a file with a lot of lines and, somewhere in the history, a commit removing a line, which we'll look for afterwards.
$ hg init testBisect
$ cd testBisect
$ cat testFile
This is
a file
with some feature
and this
line here
is needed to work fine
$ hg ci -A -m "init file"
adding testFile
$ for (( i = 0; i < 335; i++ )); do echo "line"$i >> testFile; hg commit -m "change here"; done
$ sed -i 's/line here//' testFile
$ hg ci -m "this commit is buggy"
$ for (( i = 0; i < 872; i++ )); do echo "line"$i >> testFile; hg commit -m "change here"; done
$ hg parents
changeset:   1208:7568a581b554
tag:         tip
user:        Asier Lostalé
date:        Mon Mar 02 11:13:39 2009 +0100
summary:     change here
  • Now we have 1208 changesets! Let's write the script that decides whether a revision is buggy and tells bisect to keep looking when it is not:
$ cat test1.sh
#!/bin/sh
MIN_ARGS=2
if [ $# -lt $MIN_ARGS ]; then
  echo "Usage: $(basename $0) FILE TEXT_TO_FIND" >&2
  exit 1
fi
FILE=$1
shift
TEXT_TO_FIND=$*
check() {
  grep -q "$TEXT_TO_FIND" "$FILE" && RESULT=good || RESULT=bad
  echo $RESULT

  hg bisect --$RESULT
}
while :
do
  if check | grep -q 'Testing changeset'; then
    echo
    hg bisect
  else
    hg bisect
    exit 0
  fi
done
  • Now we are ready to start testing. First of all, reset bisect and tell it the known bad and good revisions: bad is the current one and good is the first one.
$ hg bisect --reset
$ hg bisect --bad
$ hg bisect --good 1
Testing changeset 604:9d6a42635e81 (1207 changesets remaining, ~10 tests)
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  • Finally, just execute the test to find out which commit removed the line.
$ time ./test1.sh testFile 'line here'
Testing changeset 302:132a5339324e (603 changesets remaining, ~9 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
Testing changeset 453:aa92eb899545 (302 changesets remaining, ~8 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
Testing changeset 377:5c8e69bdb1ce (151 changesets remaining, ~7 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
Testing changeset 339:05f7bb18e505 (75 changesets remaining, ~6 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
Testing changeset 320:3107aee2dbd2 (37 changesets remaining, ~5 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
Testing changeset 329:0d907ee53cdb (19 changesets remaining, ~4 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
Testing changeset 334:8a0d38375333 (10 changesets remaining, ~3 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved

Testing changeset 336:dc2037e24dfc (5 changesets remaining, ~2 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
Testing changeset 335:939ca611ae0f (2 changesets remaining, ~1 tests)
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
The first bad revision is:
changeset:   336:dc2037e24dfc
user:        Asier Lostalé
date:        Mon Mar 02 11:11:52 2009 +0100
summary:     this commit is buggy
real    0m1.374s
user    0m1.128s
sys    0m0.204s

And we are done: in less than 1.5 seconds we know which commit removed the line!

Though this example is quite artificial, I think bisect is a very good solution for this kind of search. Do you have experience with this extension? Any comments about it are welcome.

New build process: keep it simple

(Old) Problems

For Openbravo ERP developers (especially newcomers) it has always been difficult to decide which of the available build tasks was the best choice after any development. We had to take into account which modifications had been made to know which ant task to run. For example, if we had modified a window we would use ant compile -Dtab=myWindow to generate the code just for that window and not for the rest of them. It was even worse when working with Subversion: each time we updated our working copy we had to check which files had changed to know whether it was necessary to run update.database to synchronize the Openbravo model database (database schema objects and application dictionary data) from the XML files. And if there were modifications there, it was worth regenerating all the WAD windows, because it was difficult to know which ones had been modified. So compile.complete was often the "safe" but slow choice.

The update.database task had two more drawbacks. The first was that if the Openbravo model had been modified locally and not exported (ant export.database), executing the task would lose all the application dictionary changes made in the database; this annoying behavior had been reported as a bug. The second was that during the r2.50 development cycle, especially because of the usage of DAL as part of the update process, the task was pretty unstable, making people reluctant to use it. As a result, people felt safer recreating the whole system (ant install.source) instead of executing a much faster incremental build (ant update.database compile.complete).

New task: smartbuild

For Openbravo ERP r2.50 we have resolved these problems by simplifying the build process with a new incremental build task: smartbuild, which is currently available in trunk (r12753) and will be released in the next alpha (alpha r11). This task performs all the processes required to build your system, but only the required ones, with a huge improvement in performance: it checks whether the database needs to be updated from the XML sources and performs the update only if needed, generates only the code that needs to be regenerated, then compiles and deploys it.

The goal of smartbuild is to replace most of the other tasks, making life a little simpler for developers. Now only two tasks are needed: smartbuild for all builds and export.database to export the database to XML files. export.database is now smart enough to export only when needed, skipping the process if no changes have happened in the local Openbravo model.

Moreover, update.database now ensures before updating that no local changes have occurred in the Openbravo model since the last synchronization (export.database or update.database), to prevent people from losing their changes. If there are local changes, they will be required to export their database before updating it.

How it works

  • Determine whether the database needs to be updated. To do this, smartbuild generates a checksum for the XML files and compares it with the stored one, which is regenerated each time the database is synchronized from or to the XML files. If the two checksums differ, the XML files have changed, so the database is updated.
  • Decide which code needs to be regenerated. Whenever a build is done, a timestamp with the current time is stored in the database. This timestamp is compared with the audit info of the application dictionary objects that participate in code generation, so WAD is now able to generate code only for the elements created or modified after the last build. Additionally, when exporting the database to XML files the audit info is no longer exported, and when updating, the audit info is recalculated with the current time, so this also works when the application dictionary modifications come from an update.database. There is only one case where this check fails: when application dictionary elements are modified directly in the database through insert/update SQL statements without updating the audit info. In that case the developer has to generate the code the old way (using compile -Dtab=modifiedWindows).
  • Check whether the database has changed. This check makes it possible to export only when there are changes in the database, and it prevents data loss when updating the database. It uses the same timestamp as the first point: modifications to data are calculated by DAL, and modifications to the database structure are queried directly from the database. The query for the last structural modification is no problem in Oracle, because the User_Objects table stores the last physical change for each database object, but PostgreSQL does not store that information. This has been solved for PostgreSQL by generating a checksum in the database from all the elements that can be exported to XML files, which is why this check takes longer in PostgreSQL than in Oracle.
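
The checksum idea in the first step can be pictured in a few lines. This is only an illustrative sketch, not Openbravo's actual implementation: the class and method names are made up, and the real task hashes files on disk and stores the result at each synchronization.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SourceChecksum {
  // Hex MD5 over the xml sources: a change in any file changes the value.
  static String checksum(String... xmlContents) throws Exception {
    MessageDigest md = MessageDigest.getInstance("MD5");
    for (String xml : xmlContents) {
      md.update(xml.getBytes(StandardCharsets.UTF_8));
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : md.digest()) {
      hex.append(String.format("%02x", b));
    }
    return hex.toString();
  }

  // Compare against the checksum stored at the last synchronization; only
  // a mismatch means the update has any work to do.
  static boolean databaseNeedsUpdate(String stored, String... xmlContents) throws Exception {
    return !checksum(xmlContents).equals(stored);
  }
}
```

With this scheme an unchanged working copy costs one hash and one comparison instead of a full database update.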

Multithread safe servlets

This post explains the multithreading safety problem in servlets and how it can be prevented. I'm writing about it because I recently fixed a bug in Openbravo ERP related to this issue, and I would like to remind developers to take it into account.

Tomcat manages servlets by creating a single instance of the class and having multiple threads invoke methods on that instance, each thread serving one of the simultaneous requests. Thus a single servlet instance can be used at the same time by different users.

All this must be kept in mind when developing servlets in order to prevent dirty reads of global properties. Let me explain it through a little example. Let's define a servlet:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class Test extends HttpServlet {
  private String st; // shared by all request threads

  public void doPost(HttpServletRequest request, HttpServletResponse response)
      throws IOException, ServletException {
    st = readStringFromSomewhereElse();
    // ... do other stuff here
    System.out.println(st);
  }
}

In this case two users could invoke doPost simultaneously on the same instance of the Test class: the first could set st to “A” and, before printing its value, the second could set it to “B”; then the first one would print “B” instead of the “A” it expected. In practice, the property behaves as if it were static.

So this pattern should be avoided when developing servlets; that is, global properties should not be modified in the doPost method (or in any method called from it). Generally, this is easily solved by using local variables inside doPost instead of global properties. In fact, global properties should be initialized just once in the init method and never modified afterwards.
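
The race described above does not need a servlet container to reproduce: plain threads sharing one instance show the same effect. Here is a small self-contained sketch; the class and field mirror the Test servlet, but the servlet API is not involved and the sleeps simply stand in for the "do other stuff" work.

```java
// One instance, a shared field, two concurrent callers: the same hazard the
// Test servlet has under Tomcat.
public class SharedFieldRace {
  private String st; // shared by every thread, like the servlet's field

  String handle(String value) throws InterruptedException {
    st = value;       // thread A stores "A"
    Thread.sleep(50); // "... do other stuff here"
    return st;        // by now thread B may have overwritten it
  }

  public static void main(String[] args) throws Exception {
    SharedFieldRace instance = new SharedFieldRace();
    String[] seen = new String[2];
    Thread a = new Thread(() -> {
      try { seen[0] = instance.handle("A"); } catch (InterruptedException e) { }
    });
    Thread b = new Thread(() -> {
      try { Thread.sleep(10); seen[1] = instance.handle("B"); } catch (InterruptedException e) { }
    });
    a.start(); b.start();
    a.join(); b.join();
    // The first caller stored "A" but typically reads back "B"
    System.out.println("A saw: " + seen[0] + ", B saw: " + seen[1]);
  }
}
```

Replacing the field with a local variable inside handle, as recommended for doPost above, makes each thread see only its own value.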

New stuff for Openbravo ERP developers

The modularity project was recently merged back into Openbravo ERP 2.50 trunk. Apart from the features described within the project, some other useful utilities have been developed that any development can use from r2.50 on. Some of them are:

GenericTree

GenericTree makes it possible to render an Ajax tree for any tree data structure in Openbravo ERP. It draws the user interface for the tree and manages the Ajax calls for opening and closing nodes.

GenericTree is an abstract Java class that can be extended to implement different trees. A subclass just needs to populate a FieldProvider object with the information for the concrete tree; once that is done, showing an Ajax tree is only a matter of instantiating the subclass.

Let’s see some code:

This is the Java piece of code:

ModuleTree tree = new ModuleTree(this);
tree.setLanguage(vars.getLanguage());
//Obtains a tree for the installed modules
xmlDocument.setParameter("moduleTree", tree.toHtml());
//Obtains a box to display the module descriptions
xmlDocument.setParameter("moduleTreeDescription", tree.descriptionToHtml());

And here is the HTML side:


<tr>
<PARAMETER_TMP id="moduleTree"/> <!-- Prints module tree 4 cols -->
<td/>
<td/>
</tr>

<tr>
<PARAMETER_TMP id="moduleTreeDescription"/> <!-- Prints module tree desc 4 cols -->
<td/>
<td/>
</tr>

In this example, ModuleTree is a GenericTree subclass that implements the queries for the tree of modules. Creating a new instance of this class and placing the output of toHtml() in the HTML template displays the user interface and manages all the Ajax requests.

FieldProviderFactory

This class makes it possible to transform any object with getter methods into a FieldProvider, which is useful for rendering such a non-FieldProvider object within a structure inside an xmlEngine template.

Here’s the code:


WebServiceImplServiceLocator loc = new WebServiceImplServiceLocator();
WebServiceImpl ws = (WebServiceImpl) loc.getWebService();
Module module = ws.moduleDetail(recordId);
ModuleDependency[] dependencies = module.getDependencies();
xmlDocument.setData("dependencies", FieldProviderFactory.getFieldProviderArray(dependencies));

In this example, ModuleDependency is a class with getter methods, so to use it to fill an HTML template it can be converted into a FieldProvider using the FieldProviderFactory class.
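
The idea behind FieldProviderFactory can be pictured with a few lines of reflection: read a bean's getters into name/value pairs that a template can query by field name. This is only a simplified stand-in; the real FieldProvider and FieldProviderFactory API in Openbravo differs in detail, and the Dependency bean here is hypothetical.

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class BeanFields {
  // Hypothetical bean, standing in for something like ModuleDependency
  public static class Dependency {
    public String getModuleName() { return "core"; }
    public String getVersion() { return "2.50"; }
  }

  // Read every no-argument getter into a name -> value map that a template
  // engine could query by field name.
  static Map<String, String> toFieldMap(Object bean) throws Exception {
    Map<String, String> fields = new HashMap<String, String>();
    for (Method m : bean.getClass().getMethods()) {
      if (m.getName().startsWith("get") && m.getParameterCount() == 0
          && !m.getName().equals("getClass")) {
        String name = m.getName().substring(3);
        name = Character.toLowerCase(name.charAt(0)) + name.substring(1);
        Object value = m.invoke(bean);
        fields.put(name, value == null ? null : value.toString());
      }
    }
    return fields;
  }
}
```

A call such as toFieldMap(new Dependency()) yields a map with moduleName and version entries, which is essentially what a template structure needs.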

AntExecutor

AntExecutor can execute any ant task from any build.xml file. It is also possible to attach different loggers, for example a file logger or an OBPrintStream, which can be used to display the generated log in real time.

This is a basic example that creates a new AntExecutor for a build.xml file, adds a task, sets a property and executes the task:


AntExecutor ant = new AntExecutor("/path/to/build.xml");
Vector<String> tasks = new Vector<String>();
tasks.add("apply.modules");
ant.setProperty("module", "test1");
ant.runTask(tasks);

Zip

The Zip class zips and unzips files.

It is easy to use:


Zip.zip("/path/to/zip/", "/file/to.zip");
Zip.unzip("/file/to/un.zip", "/path/to/unzip");

Subversion 1.5 merge problems

Over the last few days we have had several problems trying to merge two branches using Subversion. I wanted to merge trunk into modularity, but I always got this error:

svn: Working copy path 'lib/runtime' does not exist in repository

This happened with any merge command (svn merge modularity, svn merge trunk@r1 trunk@r2…).
It seems to be related to Subversion issue 3067, and the only way to make the merge work was to check out the svn branch that solves this issue, compile it, and use it to do the merge. The steps are:

1) svn co http://svn.collab.net/repos/svn/branches/issue-3067-deleted-subtrees/ svn-mod
2) cd svn-mod
3) ./autogen.sh
4) ./configure
5) make

Note that after doing the merge with that svn client, the working copy can no longer be used with the old svn client.