Python OME-TIFF Example (CPU)
This Python example performs a geometric transformation: it rotates and shifts the input image.

Introduction

Why use the CPU?

There are several reasons why we provide an example for the CPU. First of all, the processors used by APEER are very powerful, e.g. they can handle multiple threads; for many basic image-processing tasks this is completely sufficient to get your results. Furthermore, many libraries are written only for the CPU or are not optimized for the GPU.
The APEER platform therefore provides an example which is executed on the CPU. If you need more computational power, e.g. for a deep learning module, you can learn how to move from CPU to GPU.

How to start?

In order to run a module we need at least these files:
  • module_specification.json: This file defines the in- and outputs of the module as well as the respective UI components.
  • Dockerfile: This file includes the instructions to automatically build a so-called image, i.e. it defines which operating system, applications, etc. the image consists of.
  • <main_file>.py: The main file with the execute function that performs the desired operation. In our case it is geometric_transformation.py.
  • apeer_main.py: The Python file which will be executed on the APEER platform.
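Once these files are in place, the project folder might look like this (a sketch of the layout; requirements.txt, docker_make.sh and wfe.env are introduced further below):

```
project/
├── module_specification.json
├── Dockerfile
├── apeer_main.py
├── geometric_transformation.py
├── requirements.txt
├── docker_make.sh
├── wfe.env
├── input/     (test images for local runs)
└── output/    (results of local runs)
```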

Python OME-TIFF Example Module

Writing the module

The module_specification.json can include several nested objects at the top level. For our CPU example two of them are enough: spec and ui. The spec defines the input and output contract of the module, and the ui section tells APEER how to render the user interface when the module is run on the platform.
Step by step (doing it right)
Copy & paste (quick and dirty)
For this module, we need the following inputs:
  • input_image: image to which the geometric transformation will be applied
  • angle: angle by which the image will be rotated
  • shift_x: shift of the image in the width direction
  • shift_y: shift of the image in the height direction
{
    "spec": {
        "inputs": {
            "input_image": {
                "type:file": {}
            },
            "angle": {
                "type:string": {},
                "default": "0.0"
            },
            "shift_x": {
                "type:string": {},
                "default": "0"
            },
            "shift_y": {
                "type:string": {},
                "default": "0"
            }
        },
As an output we will get a geometrically transformed image:
  • output_image
        "outputs": {
            "output_image": {
                "type:file": {}
            }
        }
    },
For the UI, we need to render the input parameters.
    "ui": {
        "inputs": {
            "input_image": {
                "index": 0,
                "widget:none": null,
                "label": "Input image"
            },
            "angle": {
                "index": 1,
                "label": "Rotation Angle in degrees",
                "widget:textbox": {}
            },
            "shift_x": {
                "index": 2,
                "label": "Shift x in px",
                "description": "Shift x in px (must be integer number)",
                "widget:textbox": {}
            },
            "shift_y": {
                "index": 3,
                "label": "Shift y in px",
                "description": "Shift y in px (must be integer number)",
                "widget:textbox": {}
            }
        },
        "outputs": {}
    }
}
Putting it all together, our module_specification.json file looks like this:
module_specification.json
{
    "spec": {
        "inputs": {
            "input_image": {
                "type:file": {}
            },
            "angle": {
                "type:string": {},
                "default": "0.0"
            },
            "shift_x": {
                "type:string": {},
                "default": "0"
            },
            "shift_y": {
                "type:string": {},
                "default": "0"
            }
        },
        "outputs": {
            "output_image": {
                "type:file": {}
            }
        }
    },
    "ui": {
        "inputs": {
            "input_image": {
                "index": 0,
                "widget:none": null,
                "label": "Input image"
            },
            "angle": {
                "index": 1,
                "label": "Rotation Angle in degrees",
                "widget:textbox": {}
            },
            "shift_x": {
                "index": 2,
                "label": "Shift x in px",
                "description": "Shift x in px (must be integer number)",
                "widget:textbox": {}
            },
            "shift_y": {
                "index": 3,
                "label": "Shift y in px",
                "description": "Shift y in px (must be integer number)",
                "widget:textbox": {}
            }
        },
        "outputs": {}
    }
}
Let us have a look at the script that holds the main logic of our module and performs the actual operation, geometric_transformation.py:
geometric_transformation.py
import os
from scipy import ndimage as ndi
from apeer_ometiff_library import io, processing
import skimage
import numpy as np

def execute(image_path, angle, shift_x, shift_y):

    image_name = os.path.basename(image_path)
    if image_path.lower().endswith('.ome.tiff') or image_path.lower().endswith('.ome.tif'):
        # Read original image
        (array5d, omexml) = io.read_ometiff(image_path)
        # Return value is a 5D array of order (T, Z, C, X, Y)
        # Apply 2D function to 5D array
        arrayOut5d = processing.apply_2d_trafo(_geometric_transformation,
                                               array5d,
                                               angle=angle,
                                               shift_x=shift_x,
                                               shift_y=shift_y)

        # In case you have a 3D function that acts on a whole Z-stack use this code:
        # The order of the 3D input image of trafo3d should be: (Z, X, Y)
        # arrayOut5d = processing.apply_3d_trafo_zstack(trafo3d, array5d, angle)

        # In case you have a 3D function that acts on a whole RGB image use this code:
        # The order of the 3D input image of trafo3d should be: (C, X, Y)
        # arrayOut5d = processing.apply_3d_trafo_rgb(trafo3d, array5d, angle)

        io.write_ometiff(image_name, arrayOut5d, omexml)
    else:
        # Read original image
        array = skimage.io.imread(image_path)
        if array.ndim == 3:  # Process image with color channels
            result = np.zeros_like(array)
            for channel in range(array.shape[2]):
                result[:, :, channel] = _geometric_transformation(array[:, :, channel], angle, shift_x, shift_y)
        elif array.ndim == 2:  # Process grayscale image
            result = _geometric_transformation(array, angle, shift_x, shift_y)
        # Write modified image
        skimage.io.imsave(image_name, result)

    return {'output_image': image_name}

def _geometric_transformation(image2d, angle, shift_x, shift_y):
    image_rotated = ndi.rotate(image2d, float(angle), reshape=False)
    image_shifted = ndi.shift(image_rotated, (float(shift_x), float(shift_y)))
    return image_shifted

# Test code locally
if __name__ == "__main__":
    execute("input/nucleiTubolin.ome.tiff", 45.0, 20, 50)
    execute("input/nucleiTubolin.jpg", 45.0, 20, 50)
Pay attention that the script contains two functions, execute and _geometric_transformation. The actual geometric transformation is applied per channel of the input image within _geometric_transformation, while execute calls processing.apply_2d_trafo, which applies it to the whole 5D image with _geometric_transformation passed as a parameter.
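Conceptually, apply_2d_trafo loops the 2D function over every (X, Y) plane of the 5D array. The following self-contained sketch illustrates the idea (it is not the library's actual implementation; the function names apply_2d_sketch and rotate_shift are our own):

```python
import numpy as np
from scipy import ndimage as ndi

def apply_2d_sketch(trafo2d, array5d, **kwargs):
    """Apply a 2D transform to every (X, Y) plane of a (T, Z, C, X, Y) array."""
    out = np.zeros_like(array5d)
    for t in range(array5d.shape[0]):
        for z in range(array5d.shape[1]):
            for c in range(array5d.shape[2]):
                out[t, z, c] = trafo2d(array5d[t, z, c], **kwargs)
    return out

def rotate_shift(image2d, angle, shift_x, shift_y):
    # Same per-plane operation as _geometric_transformation above
    rotated = ndi.rotate(image2d, float(angle), reshape=False)
    return ndi.shift(rotated, (float(shift_x), float(shift_y)))

array5d = np.random.rand(2, 3, 2, 16, 16)  # (T, Z, C, X, Y)
result = apply_2d_sketch(rotate_shift, array5d, angle=45.0, shift_x=2, shift_y=1)
print(result.shape)  # (2, 3, 2, 16, 16)
```

Because reshape=False is passed to ndi.rotate, every plane keeps its original size, so the output array has exactly the shape of the input.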
On the APEER platform the apeer_main.py file will be executed. It looks as follows:
apeer_main.py
from apeer_dev_kit import adk
import geometric_transformation


if __name__ == "__main__":
    inputs = adk.get_inputs()

    outputs = geometric_transformation.execute(inputs['input_image'], inputs['angle'], inputs['shift_x'], inputs['shift_y'])

    adk.set_file_output('output_image', outputs['output_image'])
    adk.finalize()
For this example we use the preconfigured python:3.6 Docker image as the base of our Dockerfile. With the VOLUME command we specify the additional directories that will be mounted later for sending files to and from the container. This is WFE convention and will always be the same. Create both the input and output directories inside your project folder and place the test image inside the input folder, e.g. Path/to/project/folder/input/nucleiTubolin.ome.tiff.
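For example, from inside the project folder the two directories can be created like this (the cp line is just a placeholder for wherever your test image lives):

```shell
mkdir -p input output
# copy your test image into ./input, e.g.:
# cp /path/to/nucleiTubolin.ome.tiff input/
```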
Additionally we install the external Python libraries specified in requirements.txt, among them numpy and scipy as well as the APEER dev kit and OME-TIFF library.
Dockerfile
requirements.txt
FROM python:3.6

WORKDIR /usr/src/app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY apeer_main.py .
COPY geometric_transformation.py .
COPY module_specification.json .

# mount volumes
VOLUME [ "/input", "/output" ]

ENTRYPOINT [ "python", "./apeer_main.py" ]
apeer-dev-kit>=1.0.6,<2
apeer-ometiff-library>=1.3.1,<2

scipy==1.1.0
numpy==1.15.4

Testing the module

In order to test the module locally from within the Docker image, we need to pass the input values and file paths to the module. The value of the WFE_INPUT_JSON variable is stored in the wfe.env file and looks like this:
wfe.env
{
    "WFE_output_params_file": "wfe_module_params_1_1.json",
    "input_image": "/input/nucleiTubolin.ome.tiff",
    "angle": 90.0,
    "shift_x": 20,
    "shift_y": 50
}
This tells our module to expect the input image at /input/nucleiTubolin.ome.tiff and sets the rest of the input parameters. Outputs are written to the /output/ directory by default.
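Under the hood, adk.get_inputs() reads this JSON from the WFE_INPUT_JSON environment variable. A minimal sketch of that mechanism (for illustration only; the real dev kit does more, e.g. handling file transfers):

```python
import json
import os

# Simulate what docker_make.sh does: put the wfe.env content into the environment
os.environ["WFE_INPUT_JSON"] = """
{
    "WFE_output_params_file": "wfe_module_params_1_1.json",
    "input_image": "/input/nucleiTubolin.ome.tiff",
    "angle": 90.0,
    "shift_x": 20,
    "shift_y": 50
}
"""

# What a module effectively sees after parsing
inputs = json.loads(os.environ["WFE_INPUT_JSON"])
print(inputs["input_image"])  # /input/nucleiTubolin.ome.tiff
print(inputs["angle"])        # 90.0
```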
The wfe.env file is passed to the Docker container inside the docker_make.sh file, which sets the WFE_INPUT_JSON environment variable from its content.
docker_make.sh
#!/bin/bash

docker build -t 'apeer/geometric_transformation' .

docker run -it --rm -v "$(pwd)"/input:/input -v "$(pwd)"/output:/output -e "WFE_INPUT_JSON=$(<wfe.env)" apeer/geometric_transformation
After these two files are created, local tests can be run simply by executing sh ./docker_make.sh in the terminal. If everything works, the results will appear in the /output/ folder. In order to test with another set of input parameters, make changes in the wfe.env file and run sh ./docker_make.sh again.

Fast lane for impatient coders ;)

Of course we have prepared the zipped project folder and some ready-to-run files for copy and paste:
apeer_main.py
docker_make.sh
Dockerfile
geometric_transformation.py
module_specification.json
requirements.txt
wfe.env
from apeer_dev_kit import adk
import geometric_transformation


if __name__ == "__main__":
    inputs = adk.get_inputs()

    outputs = geometric_transformation.execute(inputs['input_image'], inputs['angle'], inputs['shift_x'], inputs['shift_y'])

    adk.set_file_output('output_image', outputs['output_image'])
    adk.finalize()
#!/bin/bash

docker build -t 'apeer/geometric_transformation' .

docker run -it --rm -v "$(pwd)"/input:/input -v "$(pwd)"/output:/output -e "WFE_INPUT_JSON=$(<wfe.env)" apeer/geometric_transformation
FROM python:3.6

WORKDIR /usr/src/app

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY ./apeer_main.py .
COPY ./geometric_transformation.py .
COPY ./module_specification.json .

# mount volumes
VOLUME [ "/input", "/output" ]

ENTRYPOINT [ "python", "./apeer_main.py" ]
import os
from scipy import ndimage as ndi
from apeer_ometiff_library import io, processing
import skimage
import numpy as np

def execute(image_path, angle, shift_x, shift_y):

    image_name = os.path.basename(image_path)
    if image_path.lower().endswith('.ome.tiff') or image_path.lower().endswith('.ome.tif'):
        # Read original image
        (array5d, omexml) = io.read_ometiff(image_path)
        # Return value is a 5D array of order (T, Z, C, X, Y)
        # Apply 2D function to 5D array
        arrayOut5d = processing.apply_2d_trafo(_geometric_transformation,
                                               array5d,
                                               angle=angle,
                                               shift_x=shift_x,
                                               shift_y=shift_y)

        # In case you have a 3D function that acts on a whole Z-stack use this code:
        # The order of the 3D input image of trafo3d should be: (Z, X, Y)
        # arrayOut5d = processing.apply_3d_trafo_zstack(trafo3d, array5d, angle)

        # In case you have a 3D function that acts on a whole RGB image use this code:
        # The order of the 3D input image of trafo3d should be: (C, X, Y)
        # arrayOut5d = processing.apply_3d_trafo_rgb(trafo3d, array5d, angle)

        io.write_ometiff(image_name, arrayOut5d, omexml)
    else:
        # Read original image
        array = skimage.io.imread(image_path)
        if array.ndim == 3:  # Process image with color channels
            result = np.zeros_like(array)
            for channel in range(array.shape[2]):
                result[:, :, channel] = _geometric_transformation(array[:, :, channel], angle, shift_x, shift_y)
        elif array.ndim == 2:  # Process grayscale image
            result = _geometric_transformation(array, angle, shift_x, shift_y)
        # Write modified image
        skimage.io.imsave(image_name, result)

    return {'output_image': image_name}

def _geometric_transformation(image2d, angle, shift_x, shift_y):
    image_rotated = ndi.rotate(image2d, float(angle), reshape=False)
    image_shifted = ndi.shift(image_rotated, (float(shift_x), float(shift_y)))
    return image_shifted

# Test code locally
if __name__ == "__main__":
    execute("input/nucleiTubolin.ome.tiff", 45.0, 20, 50)
    execute("input/nucleiTubolin.jpg", 45.0, 20, 50)
{
    "spec": {
        "inputs": {
            "input_image": {
                "type:file": {}
            },
            "angle": {
                "type:string": {},
                "default": "0.0"
            },
            "shift_x": {
                "type:string": {},
                "default": "0"
            },
            "shift_y": {
                "type:string": {},
                "default": "0"
            }
        },
        "outputs": {
            "output_image": {
                "type:file": {}
            }
        }
    },
    "ui": {
        "inputs": {
            "input_image": {
                "index": 0,
                "widget:none": null,
                "label": "Input image"
            },
            "angle": {
                "index": 1,
                "label": "Rotation Angle in degrees",
                "widget:textbox": {}
            },
            "shift_x": {
                "index": 2,
                "label": "Shift x in px",
                "description": "Shift x in px (must be integer number)",
                "widget:textbox": {}
            },
            "shift_y": {
                "index": 3,
                "label": "Shift y in px",
                "description": "Shift y in px (must be integer number)",
                "widget:textbox": {}
            }
        },
        "outputs": {}
    }
}
apeer-dev-kit>=1.0.6,<2
apeer-ometiff-library>=1.3.1,<2

scipy==1.1.0
numpy==1.15.4

cloudpickle==0.6.1
dask==0.20.2
decorator==4.3.0
networkx==2.2
Pillow==5.3.0
PyWavelets==1.0.1
six==1.11.0
toolz==0.9.0
{
    "WFE_output_params_file": "wfe_module_params_1_1.json",
    "input_image": "/input/nucleiTubolin.ome.tiff",
    "angle": 90.0,
    "shift_x": 20,
    "shift_y": 50
}
You can also download the source code by creating a new 'Python CPU Example' Module in APEER.
Python CPU Example.zip (3 MB): Python OME-TIFF Example Source Code

Couldn't find all the information you need? Contact us!

If you need help, just check out our FAQ. For any further questions please contact us at [email protected], have a look at the How-tos section in our blog, or follow us on Twitter to stay up to date.