Loading the content of a local file in Firefox’s web extensions

Since Firefox 57, browser extensions must be web extensions.
For security reasons, these APIs do not allow interacting with the local file system. Several alternative strategies are described on the web site (IndexedDB being one of the most interesting ones, IMO).

In this article, I will only describe how one of your scripts can load a local resource that was put within your extension. I used it to load an XML file in a background script.

First, make sure your resource is available for web access.
Add this to your manifest:

"web_accessible_resources": ["path/to/my-resource.xml"],

And then, in your script:

const localUrl = browser.extension.getURL('path/to/my-resource.xml');
const req = new XMLHttpRequest();
// To parse the remote document directly as a DOM document
// req.responseType = "document";

req.onreadystatechange = function(event) {
  if (this.readyState === XMLHttpRequest.DONE) {
    if (this.status === 200) {
      console.log("Received: %s", this.responseText);
    } else {
      console.log("Error: %d (%s)", this.status, this.statusText);
    }
  }
};

req.open('GET', localUrl, true);
req.send();

And that’s it.
This snippet only prints the file’s content, but you can adapt it to your needs.

Increase Import File Size in TestLink

Just a quick reminder post…
By default, TestLink comes with a limit of 400 KB to import test specifications. To raise this limit, there are two files to update.

The first one is the php.ini file.
On Linux, you can find its location with the php --ini command. Find the upload_max_filesize parameter and increase its size. This is a global PHP setting.
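For example, with the 20 MB limit I use (see the list at the end of this post), the relevant php.ini line would read:

```ini
; Global PHP upload limit (20M is just my value, pick your own)
upload_max_filesize = 20M
```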

You then need to update the configuration of TestLink.
Find the config.inc.php file and change the $tlCfg->import_file_max_size_bytes parameter.

You might want to update the $tlCfg->import_max_row and $tlCfg->repository_max_filesize settings too.

I personally run it with Docker Compose (TestLink 1.9.x).
Here are the values I use:

  • upload_max_filesize (php.ini): 20 MB
  • $tlCfg->import_file_max_size_bytes (config.inc.php): 8192000
  • $tlCfg->import_max_row (config.inc.php): 100000
  • $tlCfg->repository_max_filesize (config.inc.php): 8 // MB
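Put together, the corresponding fragment of config.inc.php would look like this:

```php
// TestLink import limits (same values as in the list above)
$tlCfg->import_file_max_size_bytes = 8192000;
$tlCfg->import_max_row = 100000;
$tlCfg->repository_max_filesize = 8; // MB
```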

Custom Setup Task in OOMPH and namespace conflict

OOMPH is a solution that helps to install official and custom Eclipse distributions.

Those who use it for their own distro can extend its behaviour thanks to setup tasks. A setup task is made up of both an EMF model (that extends the setup.ecore/#SetupTask element from OOMPH) and Java code. This code is partially generated by EMF; you only have to complete the perform method to make it do something at runtime. Obviously, OOMPH provides a wizard to help create such a task.

However, I recently had to maintain an existing set of setup tasks. And when I opened the genmodel file for my tasks, I had a weird error message in the genmodel editor.

EMF error due to conflicting namespaces

The exact error message indicates…

Problems encountered in the model
- The package 'http://www.eclipse.org/oomph/setup/1.0#/' has the same namespace URI 'http://www.eclipse.org/oomph/setup/1.0' as package 'platform:/resource/org.eclipse.oomph.setup/model/Setup.ecore#/'
- The package 'http://www.eclipse.org/oomph/setup/1.0#/' has the same namespace URI 'http://www.eclipse.org/oomph/setup/1.0' as package 'platform:/resource/org.eclipse.oomph.base/model/Base.ecore#/'

That’s a weird message.
Even worse, it does not appear if you create a new setup task project. I compared everything: the models, the project settings… everything.

Anyway… One important thing is that this message is not blocking. The EMF editor is made up of two tabs. When such an error is found, the editor shows the problems tab. But the generator tab is still available and you can run generations anyway. So, you can ignore the message. Or you can get rid of it by following the explanations below. Note that this is just a workaround.

Taking a detailed look at the error message, it indicates that two EMF projects from OOMPH export the same package. In fact, both packages export different classes, but within the same namespace. And they reference each other (Setup extends classes from Base). As a result, EMF does not know which package to pick, as both could match.

The workaround for this is to update the ecore model.
In the generated ecore, the super type is resolved by namespace.
If you reference it by the location of the ecore model instead, that will solve the problem.
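As an illustration (the class name below is made up, this is not the exact content of the generated file), the super type reference would change from something like…

```xml
<eClassifiers xsi:type="ecore:EClass" name="MySetupTask"
    eSuperTypes="http://www.eclipse.org/oomph/setup/1.0#//SetupTask"/>
```

… to a location-based reference:

```xml
<eClassifiers xsi:type="ecore:EClass" name="MySetupTask"
    eSuperTypes="platform:/resource/org.eclipse.oomph.setup/model/Setup.ecore#//SetupTask"/>
```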


The Setup classes extend the Base ones.
So, you can directly reference the Setup.ecore file. You can also update your genmodel file with the URL of the existing generators:

usedGenPackages="platform:/resource/org.eclipse.oomph.base/model/Base.genmodel#//base platform:/resource/org.eclipse.oomph.setup/model/Setup.genmodel#//setup"

… instead of…

usedGenPackages="../../org.eclipse.oomph.base/model/Base.genmodel#//base ../../org.eclipse.oomph.setup/model/Setup.genmodel#//setup"

Finally, you will want a solution that prevents the genmodel from rewriting the ecore file: just remove the publicationLocation attribute from your genmodel. Otherwise, every time you generate code from your genmodel file, it will rewrite the super types in your ecore file. Definitely not what you want.

PS: I have still not understood why the error sometimes appears.
In my case, the ecore file defined several setup tasks in the same file. My other example did not. Maybe that’s the reason.

Bouncycastle, OSGi and uber-jars

I recently tried to use SSHj in an OSGi bundle.
I had decided to wrap this library and its dependencies in my own bundle (something equivalent to an uber-jar, but compliant with OSGi). In a classic context, you would use the Maven Shade plug-in. In an OSGi context, you can simply use the Maven Bundle plug-in with the Embed-Dependency instruction.
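As a sketch, such a Maven Bundle plug-in configuration could look like this (the dependency filter is an example, adapt it to your project):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- Embed all compile-scope dependencies, including transitive ones -->
      <Embed-Dependency>*;scope=compile</Embed-Dependency>
      <Embed-Transitive>true</Embed-Transitive>
    </instructions>
  </configuration>
</plugin>
```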

Anyway, the difficulty here came from the fact that SSHj uses Bouncycastle as a security provider, and you cannot do whatever you want with it. My first attempt to build an all-in-one bundle resulted in a signature error during the Maven build.

Invalid signature file digest for Manifest main attributes

Indeed, some files in the JAR were signed and some others were not. I solved it with the Maven AntRun plug-in, removing the signature files from my JAR and repackaging it. The JAR could finally be built. Unfortunately, later at runtime, another error came out.
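Here is a sketch of what this clean-up could look like (the output file name and the build phase are assumptions, not my exact setup):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <!-- Repackage the JAR without the signature files -->
          <zip destfile="${project.build.directory}/${project.build.finalName}-unsigned.jar">
            <zipfileset src="${project.build.directory}/${project.build.finalName}.jar"
                excludes="META-INF/*.SF,META-INF/*.DSA,META-INF/*.RSA" />
          </zip>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
```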

JCE cannot authenticate the provider BC

Looking at the logs of SSHj, no provider was found, even though the right classes were in the bundle’s class path. And everything was working outside of OSGi. So, there was no error with compilation levels or in my code. For some reason, Bouncycastle was not loaded by the JVM.

The explanation is that JCE (Java Cryptography Extension) providers are loaded in a special way by the JVM. First, they must be signed. And it seems that not just any certificate can be used (it must be approved by the JVM vendor). Bouncycastle is signed, but if you wrap it into another JAR, you lose the signature. Then, these providers must be loaded at startup. Applied to an OSGi context, it means you cannot deploy Bouncycastle as a bundle whenever you want.

Finally, I solved my issue by…

  • … following Karaf’s documentation and making Bouncycastle a part of my Karaf distribution (copying it into lib/ext and updating some configuration files). See this POM to automate such a thing for your custom Karaf distribution.
  • … not importing org.bouncycastle.* in my packages. That’s because putting these libraries under lib/ext means these packages are treated as root classes (just like java.*). There is no need to import them then.
  • … making sure all the bundles that depend on Bouncycastle would use the same version. I achieved it by updating the dependencyManagement section in my parent POM.
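For the version alignment, the parent POM fragment could look like this (the artifact and version below are examples, not necessarily what SSHj requires):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Force a single Bouncycastle version for all modules -->
    <dependency>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcprov-jdk15on</artifactId>
      <version>1.60</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```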

And although it was not a real issue, I decided to provide SSHj as a Karaf feature. This way, there is no need to make an uber-jar, and I can reuse it as much as I want. See this file for a description of this feature (ssh4j-for-roboconf). The dependencies that are not already OSGi bundles are deployed through the wrap protocol.

I spent about a day solving these issues. That’s why I thought an article might help.