Execute Script Processor

When I came across this requirement, I first looked into the JoltTransformJSON processor. With Jolt we can modify or delete JSON attributes and append new attributes to the flow file, but it only lets us insert static attributes, not dynamic ones.

After a long search I found the ExecuteScript processor; with it we can write a Python script and achieve the requirement.


ExecuteScript allows us to modify the incoming flow file and its content. In this blog I will explain how to do it.

ExecuteScript gives us access to a few standard objects like session, context, and log.

We use the InputStreamCallback and OutputStreamCallback interfaces to read the incoming flow file content, modify it, and write it back.
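To make the callback idea concrete, here is a plain-Python analogue (not NiFi code) of the read/transform/write pattern, with io.BytesIO standing in for the processor's input and output streams; the names read_callback and write_callback are illustrative only.

```python
import io
import json


def read_callback(input_stream):
    # Read the full flow file content and parse it as JSON.
    return json.loads(input_stream.read().decode("utf-8"))


def write_callback(output_stream, obj):
    # Serialize the (possibly modified) JSON back to the stream.
    output_stream.write(json.dumps(obj).encode("utf-8"))


# Simulate one flow file passing through the processor.
incoming = io.BytesIO(b'{"user": "rajesh", "sug_stocks": "GOOGL, MSFT"}')
content = read_callback(incoming)
content["checked"] = True          # any in-place modification
outgoing = io.BytesIO()
write_callback(outgoing, content)
print(outgoing.getvalue().decode("utf-8"))
```

In NiFi the session hands these streams to your callbacks; the shape of the logic is the same.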

ExecuteScript supports multiple script engines, such as Python (Jython), Groovy, ECMAScript, Ruby, Lua, and Clojure. I'm using Python, which I'm familiar with.

Requirement:

I need to store a customer's name with suggested stock details in HBase: the name, the suggested stocks, and the suggestion date. From a Kafka producer I receive all the stocks suggested for a customer as a single comma-separated string.

Example of incoming data: 

{"user": "rajesh", "sug_date": "12/31/2020, 15:11:27", "sug_stocks": "GOOGL, MSFT"}

{"user": "rohit", "sug_date": "12/31/2020, 16:02:15", "sug_stocks": "MRF, AMZN, TSLA"}

We need to produce output like the following.

Required output:

{"GOOGL": "", "sug_date": "12/31/2020, 15:11:27", "MSFT": "", "user": "rajesh"}

{"MRF": "", "AMZN": "", "sug_date": "12/31/2020, 16:02:15", "user": "rohit", "TSLA": ""}


Comparing the input with the required output: each value in sug_stocks (GOOGL, MSFT) becomes a key with an empty string as its value, and the sug_stocks attribute itself is removed.
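The core transformation can be tried outside NiFi first. In this sketch the function name transform_record is mine, not part of the NiFi script; it promotes each comma-separated stock to its own key and drops sug_stocks:

```python
import json


def transform_record(record):
    # Promote each suggested stock to its own key with an empty value.
    transformed = dict(record)
    for stock in str(transformed["sug_stocks"]).split(","):
        transformed[stock.strip()] = ""
    # Remove the original comma-separated attribute.
    del transformed["sug_stocks"]
    return transformed


record = {"user": "rajesh",
          "sug_date": "12/31/2020, 15:11:27",
          "sug_stocks": "GOOGL, MSFT"}
print(json.dumps(transform_record(record)))
# {"user": "rajesh", "sug_date": "12/31/2020, 15:11:27", "GOOGL": "", "MSFT": ""}
```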

Publishing input data:

The code snippet below builds two customer objects with suggested stocks and publishes them as Kafka messages to the test topic.

import json
from datetime import datetime
from kafka import KafkaProducer


def connect_kafka_producer():
    _producer = None
    try:
        _producer = KafkaProducer(bootstrap_servers=["localhost:9092"],
                                  max_request_size=3222000000)
    except Exception as ex:
        print("Exception while connecting Kafka")
        print(str(ex))
    return _producer


def publish_msg(key, value):
    producer_obj = connect_kafka_producer()
    key_bytes = bytes(key, encoding="utf-8")
    value_bytes = bytes(value, encoding="utf-8")
    topic_name = "test"
    producer_obj.send(topic_name, key=key_bytes, value=value_bytes)
    producer_obj.flush()


if __name__ == '__main__':
    person_obj1 = {"user": "rohit",
                   "sug_date": datetime.now().strftime("%m/%d/%Y, %H:%M:%S"),
                   "sug_stocks": "MRF, AMZN, TSLA"}
    person_obj2 = {"user": "rajesh",
                   "sug_date": datetime.now().strftime("%m/%d/%Y, %H:%M:%S"),
                   "sug_stocks": "GOOGL, MSFT"}
    publish_msg('person', json.dumps(person_obj1))
    publish_msg('person', json.dumps(person_obj2))


Once we execute the above script, the messages are published to the test topic. On the NiFi side we need to consume them.

Here I'm attaching my NiFi flow, and I will explain each processor that I used.


ConsumeKafka: used to read the messages we published with the Kafka producer. Here we need to do a few configurations.

Kafka Brokers: the Kafka broker host and port (localhost:9092 in this example).

Topic Name(s): I used test on the producer side, so I need the same topic name here to read the published messages.
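Before wiring up NiFi, you can sanity-check that the messages landed on the topic with Kafka's console consumer (assuming a standard Kafka installation; adjust the script path and broker address to your setup):

```shell
# Read everything published to the "test" topic so far
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic test --from-beginning
```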


We got the two messages that we sent from the producer.


Here I'm adding one property, sug_stocks, which we send from the producer side; once extracted it is accessible as a flow file attribute and we can manipulate it. We can see it under the attributes section of the next processor.



In the ExecuteScript configuration we set a few properties, such as the Script Engine, the Script File path, and the virtual environment path under Module Directory.

In the script we need to modify the JSON attribute sug_stocks: read its value and turn each entry into a key.

Input for the script:


Output after script execution:

We need to insert the above output into our HBase table.

Here I'm sharing the script:

import json
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import InputStreamCallback, OutputStreamCallback


class PyInputStreamCallback(InputStreamCallback):
    def __init__(self):
        self.json_content = {}

    def process(self, inputStream):
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        self.json_content = json.loads(text)
        # attribute that we need to modify
        categories = self.json_content["sug_stocks"]
        for x in str(categories).split(","):
            # insert each stock as a key with an empty value
            self.json_content[x.strip()] = ""
        # removing the sug_stocks attribute
        del self.json_content["sug_stocks"]


class OutputWrite(OutputStreamCallback):
    def __init__(self, obj):
        self.obj = obj

    def process(self, outputStream):
        # write the modified JSON back as the new flow file content
        outputStream.write(bytes(json.dumps(self.obj).encode("utf-8")))


flowfile = session.get()

if flowfile is not None:
    py_is = PyInputStreamCallback()
    session.read(flowfile, py_is)
    flowfile = session.write(flowfile, OutputWrite(py_is.json_content))
    session.transfer(flowfile, REL_SUCCESS)





While working with ExecuteScript, make sure the flow file object you transfer is the latest one you have: session.write() returns a new flow file object, and if you write or transfer a stale one you can get a session-isn't-closed exception inside the NiFi processor.
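As a rule of thumb, always capture the return value of session.write() and transfer that object. A minimal shape of the ExecuteScript body (session and REL_SUCCESS are injected by NiFi, so this fragment is not standalone-runnable):

```python
flowfile = session.get()
if flowfile is not None:
    reader = PyInputStreamCallback()
    session.read(flowfile, reader)
    # session.write returns the NEW flow file; keep and transfer that one.
    flowfile = session.write(flowfile, OutputWrite(reader.json_content))
    session.transfer(flowfile, REL_SUCCESS)
```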


We need to add a few properties to the PutHBaseJSON processor, such as the Table Name and Column Family under which the data should be stored.

We can see the output in the HBase customer table.



