Increase Import File Size in TestLink

Just a quick reminder post…
By default, TestLink comes with a 400 KB limit for importing test specifications. To raise this limit, there are two files to update.

The first one is the php.ini file.
On Linux, you can find its location with the php --ini command. Find the upload_max_filesize parameter and increase its value. This is a global PHP setting.
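
For example (a sketch: since PHP also caps the whole POST body, post_max_size is worth checking too — it should be at least as large as upload_max_filesize):

```ini
; php.ini -- location given by `php --ini`
upload_max_filesize = 20M
post_max_size = 20M
```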

You then need to update the configuration of TestLink.
Locate TestLink's configuration file and change the $tlCfg->import_file_max_size_bytes parameter.

You might want to update the $tlCfg->import_max_row and $tlCfg->repository_max_filesize settings too.

I personally run it with Docker Compose (TestLink 1.9.x).
Here are the values I use:

  • upload_max_filesize (php.ini): 20 MB
  • $tlCfg->import_file_max_size_bytes = 8192000
  • $tlCfg->import_max_row = 100000
  • $tlCfg->repository_max_filesize = 8 // MB
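
As a sketch, assuming your overrides live in custom_config.inc.php (TestLink's usual place for custom settings — adjust to your installation), the TestLink values above translate to:

```php
<?php
// custom_config.inc.php -- assumed override file; adjust to your setup
$tlCfg->import_file_max_size_bytes = 8192000;  // ~8 MB
$tlCfg->import_max_row = 100000;
$tlCfg->repository_max_filesize = 8;           // MB
```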

Mocking Nexus API for a Local Maven Repository

For the Roboconf project, the build of our Docker images relies on the Sonatype OSS repository.
We use Nexus’ Core API to dynamically retrieve Maven artifacts. That’s really convenient. However, we quickly needed to be able to download local artifacts, for testing purposes (without going through a remote Maven repository). Let’s call it a developer scope.

After considering many solutions, we finally decided to mock Nexus’ API locally and point our build process at it to download artifacts. We really wanted something light and simple. Besides, we were only interested in the redirect operation. Loading a real Nexus was too heavy. And we really wanted to use a local Maven repository, the same one that developers usually populate. The idea of using a volume was very appealing.

So, we made a partial implementation of Nexus’ Core API.
We used NodeJS (efficient for I/O) and Restify. Restify makes it very easy to implement a REST API. And NodeJS comes with a very small Docker image (we took the one based on Alpine).

The server class is quite simple.
We expect the local Maven repository to be loaded as a volume in the Docker container. We handle SHA1 requests specifically, as developers generally do not use the profile that generates hashes.

'use strict';

var restify = require('restify');
var fs = require('fs');
var crypto = require('crypto');

/**
 * Computes the hash (SHA1) of a file.
 * <p>
 * By default, local Maven repositories do not contain
 * hashes as we do not activate the profiles. So, we compute them on the fly.
 * </p>
 * @param filePath
 * @param res
 * @param next
 * @returns nothing
 */
function computeSha1(filePath, res, next) {

  var hash = crypto.createHash('sha1'),
    stream = fs.createReadStream(filePath);

  stream.on('data', function (data) {
    hash.update(data, 'utf8');
  });

  stream.on('end', function () {
    var result = hash.digest('hex');
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end(result);
    next();
  });
}

/**
 * The function that handles the response for the "redirect" operation.
 * @param req
 * @param res
 * @param next
 * @returns nothing
 */
function respond(req, res, next) {

  var fileName = req.params.a +
    '-' + req.params.v +
    '.' + req.params.p;

  var filePath = '/home/maven/repository/' +
    req.params.g.replace(/\./g, '/') +  // Replace ALL the dots, not just the first one
    '/' + req.params.a +
    '/' + req.params.v +
    '/' + fileName;

  fs.exists(filePath, function(exists) {
    if (filePath.indexOf('.sha1', filePath.length - 5) !== -1) {
      filePath = filePath.slice(0, -5);
      computeSha1(filePath, res, next);

    } else if (! exists) {
      res.writeHead(400, {'Content-Type': 'text/plain'});
      res.end('ERROR File ' + filePath + ' does NOT exist');
      next();

    } else {
      res.writeHead(200, {
        'Content-Type': 'application/octet-stream',
        'Content-Disposition': 'attachment; filename=' + fileName});
      fs.createReadStream(filePath).pipe(res);
      next();
    }
  });
}

// Server setup

const server = restify.createServer({
  name: 'mock-for-nexus-api',
  version: '1.0.0'
});

// Without the query parser, g, a, v and p would not appear in req.params
server.use(restify.plugins.queryParser({ mapParams: true }));

server.get('/redirect', respond);

server.listen(9090, function() {
  console.log('%s listening at %s', server.name, server.url);
});
Finally, here is the Dockerfile, which ships NodeJS and our web application, to be used with Docker.

FROM node:8-alpine

LABEL maintainer="The Roboconf Team"

COPY ./*.* /usr/src/app/
WORKDIR /usr/src/app/
RUN npm install
CMD [ "npm", "start" ]
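
The npm install and npm start commands assume a package.json sits next to the server script. Here is a minimal sketch (the server.js file name and the Restify version range are assumptions):

```json
{
  "name": "mock-for-nexus-api",
  "version": "1.0.0",
  "dependencies": {
    "restify": "^6.0.0"
  },
  "scripts": {
    "start": "node server.js"
  }
}
```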

We then run…

docker run -d --rm -p 9090:9090 -v /home/me/.m2:/home/maven:ro roboconf/mock-for-nexus-api

And our other build process downloads local Maven artifacts from http://localhost:9090/redirect
You can find the full project on GitHub. If anyone faces the same problem, I hope this article provides some hints.
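
A quick way to exercise the mock, assuming hypothetical Maven coordinates (replace them with an artifact that actually exists in your local repository). The first line mirrors the group-to-path mapping done by the server:

```shell
# Hypothetical GAV coordinates
g="net.roboconf"; a="roboconf-core"; v="0.9"; p="jar"

# The file path the mock will resolve inside the mounted volume
path="${g//.//}/$a/$v/$a-$v.$p"
echo "$path"

# With the container running, the artifact can be fetched with:
# curl -o "$a-$v.$p" "http://localhost:9090/redirect?g=$g&a=$a&v=$v&p=$p"
```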

Using Kubectl under Windows

Just a small tip about kubectl, the command-line client used to interact with a Kubernetes master. I recently had to use it on Windows 7 and ran into problems with it at first. The basic configuration consists of having a config file under the ~/.kube directory. But for some reason, kubectl was not picking it up.

The solution is to create a system environment variable in Windows, named KUBECONFIG, that points to ~/.kube/config (or whatever file you want).
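
As a sketch, from a Windows command prompt (setx persists the variable for future sessions only, so reopen your terminal afterwards; %USERPROFILE% is Windows' equivalent of ~):

```bat
REM Persist KUBECONFIG for future sessions
setx KUBECONFIG "%USERPROFILE%\.kube\config"
```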

Setting up a YUM / RPM repository on Bintray

Bintray supports many kinds of repositories: Maven, Debian packages, etc. And this is very convenient for open source projects. I recently had to create a repository for Roboconf’s RPM packages. And the least I can say is that it did not work as expected. It took me several attempts before it finally worked.

The creation of an RPM repository is done through Bintray’s web interface. Create a new repository, set RPM as its type and make sure the YUM metadata folder depth is zero. Here, we assume there will be only one RPM repository contained in this "Bintray repository".

Create a new RPM repository on Bintray

Once created, add a new package in Bintray.
Packages are the entities that contain versions. In my case, there was only one package, which I called main. So far, so good.

I once wrote an article about how to upload binaries to Bintray with their REST API. I will show cURL snippets for the following steps.

First, create a version.

curl -vvf -u${BINTRAY_USER}:${BINTRAY_API_KEY} -H "Content-Type: application/json" \
	-X POST ${BINTRAY_URL}/packages/<organization>/<repository-name>/<package-name>/versions \
	--data "{\"name\": \"${RELEASE_VERSION}\", \"github_use_tag_release_notes\": false }"

Then, upload the binaries (RPM files) to this package.

for f in $(find . -name "*.rpm" -type f); do
	echo "Uploading $f"
	curl -X PUT -T $f -u ${BINTRAY_USER}:${BINTRAY_API_KEY} \
		-H "X-Bintray-Version:${RELEASE_VERSION}" \
		-H "X-Bintray-Package:<package-name>" \
		-# -o "/tmp/curl-output.txt" \
		${BINTRAY_URL}/content/<organization>/<repository-name>/$(basename $f)
	echo "$(</tmp/curl-output.txt)"
done

To publish them, I generally recommend doing it through Bintray’s web interface. This gives you a manual verification step before publishing. But you could use Bintray’s REST API too.
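
If you prefer to script that step too, a sketch with Bintray’s REST API (the publish operation on a version; the placeholders follow the same convention as above):

```shell
curl -X POST -u ${BINTRAY_USER}:${BINTRAY_API_KEY} \
	${BINTRAY_URL}/content/<organization>/<repository-name>/<package-name>/${RELEASE_VERSION}/publish
```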

Once your binaries are published, I guess you will want to test them. Personally, I like to use Docker to test my installers (but use whatever you prefer). Get a (virtual) machine and click the set me up link on Bintray’s page. There is not much documentation on Bintray, but these set me up links generally show whatever you need.

In my case, it indicated the following steps.

Set me up instructions on Bintray

There was indeed a repository file at the indicated location. But when I typed in yum install roboconf-dm, I got a 404 error (not found) about a file called repodata/repomd.xml.

Bintray indicates that metadata files should be generated automatically. Obviously, that was not the case here. Fortunately, there is a REST command that forces the generation of these metadata files.

curl -X POST -u ${BINTRAY_USER}:${BINTRAY_API_KEY} \
	${BINTRAY_URL}/calc_metadata/<organization>/<repository-name>

Then, yum install roboconf-dm worked as expected.
I hope this article will help you. I would have saved two hours if I had had all these things right from the beginning.