February 13, 2018
Posted by Anonymous
Needless to say, Industry 4.0 has become a very popular term lately. It encapsulates what promises to be a new turning point for the Italian manufacturing system, in which software will pervasively enter every aspect of the production chain. This will have effects of no secondary importance which, in my opinion, are worth highlighting right away.

The underlying vision

Before diving into the details, I think it is important to bring into focus the underlying vision behind the Industry 4.0 process. The first slogan worth highlighting is the following:

"Almost everything can be digitalized and automated. Almost everything can be described by a mathematical model and a set of data."

But have we all understood what this means? The point is to understand what this pervasive digitalization really entails, in order to steer it where we want it to go and not where others want it to go.

The real problem, in fact, will be deciding where we want it to go, because at present, from the point of view of ICT technologies, I perceive Italy (as a whole) as a country that essentially consumes ICT technologies while producing few of them, and this puts us at a disadvantage compared to other countries that govern this sector better. But before moving on, let us ask ourselves another non-trivial question: is the idea of not taking part in this revolution viable? Is it possible to stay out of it? I will be frank: even though I have no crystal ball to read the future, I truly believe that not taking part in this revolution simply means accepting industrial decline. Let us then try to focus on the added value that Industry 4.0 could bring us.

The real added value is being as adaptable as artisans but as fast and precise as industry

At first sight the immediate value would seem to be increasing automation in order to cut costs at scale. This is certainly true, but it was the added value pursued by Industry 3.0, the one we are still immersed in, not by the 4.0 we are about to enter. Cost reduction is certainly not to be underestimated, but personally I do not believe it is the fundamental point. The real added value is to be found in reaching the high end of the value chain, that is, in a greater capacity to adapt and a greater flexibility in responding to the needs of the market.

Think of the artisan who fundamentally adapts to the customer's requests on the spot:

"Make that corner a bit more rounded, because I like it better that way, and if you drill a little hole in that corner I can hang grandpa's picture there, which means a lot to me."
"All right."

The artisan says "all right" because he knows himself and masters his craft: he is able to understand how best to satisfy the needs of the customer, whom he may even know personally, and he is able to creatively adapt his work using the most appropriate tools at hand. The final result is a unique product, because the customer's request is unique and personal, and unique and personal is the artisan's ability to satisfy it. This is the "high" value to be pursued with Industry 4.0. If you disagree, say so and we can talk about it.

The uniqueness of the relationship

The question we must ask ourselves now is why it is necessary to pursue this path, seeking the added value in the ability to adapt flexibly. Here the matter gets a bit more complex, because we would need a crystal ball to observe the future and then establish which is actually the best choice. Clearly nobody can say with certainty which is the right road to follow; the future, as always, is inscrutable to everyone. However, one fundamental element can be grasped. Today the variety of products available on the market is impressive: of any object there exist versions of all kinds, for all budgets, of all qualities. I have just typed the word "violin" into Google Shopping and I was offered the opportunity to buy a violin for 49 euros as well as one for 8,206.90 euros. Both can potentially be bought by me with a few clicks and a credit card; the package will be delivered to my home. It is already like this, without any 4.0 revolution. We all know it; it is nothing new. What is it, then, that in the near future could differentiate our product or service in the middle of a worldwide bazaar of this size? The only possibility I see is to make the product unique for the one who buys it and also for the one who sells it. And the only way to make it unique is to build it according to the buyer's needs and the producer's experience. Once this observation is made, it becomes evident that everything hinges on the ability to make the relationship with the buyer unique, the purchasing experience unique, the final product unique, but the production industrialized.

Digitalization, creativity and software as the backbone

Given these premises, in my opinion there are three fundamental aspects to be taken into consideration:

1) Pervasive automation of all repetitive activities.
2) Valorization of human personnel, aimed at increasing individual responsibility and creativity.
3) Software management counted among the main activities of the industrial system.

As far as the first point is concerned, which is the technical one, the basic idea is quite simple: avoid using human labor where it amounts to a merely repetitive, automatable activity. It is clearly always a matter of weighing costs against benefits, but in general it is pointless to waste human labor on things a machine can do; it really makes no sense, think about it. The underlying concept is that human labor is a precious and valuable asset, and for this reason it must not be wasted.

Watch out, though! Taking this too literally risks automating what should not be automated! And do you know what cannot be automated? Responsibility and creativity, exceptional characteristics of the human being which you will need more and more! The interesting thing is that there is a logical and evident reason leading to this conclusion: if repetitive jobs are carried out by machines, what remains for humans are the non-repetitive ones, which are strictly tied to the capacity for autonomous thought (responsibility) and to the capacity for imagination (creativity).
And it is from this reflection that we move on to point 2), that is, the creation of a working environment that emphasizes these human abilities.

To machines, therefore, we give the task of being fast and precise, and to humans we leave the task of being adaptable and flexible. What we are missing is just a communication hinge between the world of machines and that of humans, that is, point 3: software.



Software as the essential nervous system that joins machine and human


If we can ideally say that the machine, or the machines, represent the body of our company 4.0 system and the human component represents its brain (and its heart), then software represents the nervous system that carries the right impulses from the brain to the body. For this reason it is very important to give proper attention to the information system we are going to use, since the final result will greatly depend on it. A badly conceived or badly managed IT system could indeed be the main cause of slowdowns, inefficiencies or even standstills, even in the presence of good digitalization and a good organization of personnel.

When it comes to IT, always remember a maxim that never fails:

"Anything can be done; it only depends on how much you are willing to pay."

So the problem is not so much understanding whether a given goal can be reached by intervening on your information system. In general, unless we are talking about time travel or the like, most computerization needs are technically possible and feasible. The problem is always understanding how much it costs.
The real goal we must set ourselves regarding the structure of our information system is therefore to build it in such a way as to minimize the cost of intervention in case of modifications and/or changes because, believe me, if you undertake a 4.0 strategy you will have endless modifications, continuously, every day; software changes will be your daily bread. And this is why your software system must be elastic enough to be shaped and remodeled as needed without getting hurt every time.

An elephant that moves too slowly

Until today we have been used to seeing the corporate information system as a collection of software products, each born to solve a specific problem. So we have the ERP to manage administration, the website to be found on the Internet, the e-commerce site to sell our products online, the document management system to archive documents, the supply chain software to manage distribution, and so on. All these products have always worked separately and independently from one another, and every time we needed to connect them we requested a system integration effort, planning a specific activity to move the data we needed from one side to the other. So it is possible, for example, that to make the e-commerce talk to the ERP you have to export files to a shared folder and then schedule a periodic synchronization job to perform the import. Or the integration may happen through web service calls, or again, in the worst cases, the data are carried manually from one application to another. We could picture this structure, also called an architecture, as a large building containing several offices, each dedicated to a specific activity. In each of these offices there is dedicated staff assigned to certain functions, and so on. The different offices exchange documents and information in different ways in order to carry out their functions: in some cases they send each other a fax, in other cases the employees phone each other, in other cases there are internal mail clerks for exchanging documents.


An architecture of this kind will be put to a hard test by an Industry 4.0 strategy and might not withstand the load. The reasons are the following:

  1. The cost of the modifications could be exorbitant compared to the new features requested. In many cases you are even forced to give up. This is the main limit you will run into, because the scenario you must imagine for Industry 4.0 is not one where you intervene on the information system a couple of times a year, but rather weekly if not even daily. In other words, scenarios where you make small but frequent changes will be more likely than big changes spaced years apart. And this assumption carries at least two immediate consequences:
    1. The products you buy must allow their functionalities to be modified and adapted frequently. As I imagine you all know, this leads to complications, since a native modification to a commercial software product is generally not cheap, not to mention the analysis and development time needed to implement it. Often it is easier to give up (which contradicts the adaptability we talked about above).
    2. The data interchange flows between different software products must allow frequent changes. The integration mechanisms between applications are almost always different and created ad hoc. They are often poorly documented and their management, in the worst cases, is handed down orally among the people involved. The responsibility for handling errors in these interchange flows is often vague and ambiguous; in the worst scenario it is spread across several different suppliers, who pass the buck to one another in order not to bear the burden. Managing integration projects is already complex today, since it generally requires the participation of several suppliers. As for the point above, for small modifications the game may not be worth the candle (again, a contradiction with the goal we want to reach).
  2. Applications are often more powerful than what we need. In general, the applications you buy are conceived to be general and cover as many cases as possible, and they contain a huge number of features (if you think about it, this is obvious, since only in this way can the vendor widen the market segment the product is sold to). They are excellent for what they were conceived for, no doubt, but in all likelihood you will use only a limited part of these features. For this reason, every time you buy one you bring into your information system many software features you do not need and will not use, but which must nevertheless be continuously updated, maintained and organized by the vendor together with the ones you do use. This indirect dependency on features that are useless to you may create problems when updates have to be performed.
  3. Many applications on the market are unfortunately hard to integrate and bound to well-defined platforms. Born in past decades, they carry a software design mindset that put computing power at the center and gave little attention to communication with other applications. For this reason, vendors who have not updated their approach still offer fundamentally old applications that are difficult to integrate with others except through mechanisms such as file exchange. Moreover, many applications are still strongly coupled to the underlying platform: for example, it is very common to find applications that run only on operating systems of specific brands. Even though these applications may be of excellent quality, you, the end users, will find yourselves with a software functionality that forces you to buy other supporting software components (e.g. an operating system). In many cases you will even be obliged to buy specific machines with a specific sizing just to keep them running.
  4. Every application carries with it the maintenance and update services provided by the vendor itself or by its reseller. This means that whenever you need specific modifications to those functionalities you will be forced to negotiate them with that supplier who, in the case of particular modifications that do not fall within its roadmap, might even deny you the possibility of implementing them.

All these elements turn your information system into an extremely slow elephant with little inclination to change. Exactly what you will not be able to afford, on pain of the failure of the whole Industry 4.0 strategy.

A busy beehive

If we had to suggest an image toward which our information system should tend, that of a busy beehive would certainly be closer to reality. We must imagine an information system in an Industry 4.0 perspective as a construction site that is always open and always active, in which the different components are continuously updated and continuously modified. But what are the guidelines we must follow to build an IT system of this kind?
  1. Software functionalities that are small and easily modifiable in a short time
  2. Software functionalities that are extremely easy to integrate with one another
  3. Software functionalities independent from components that are irrelevant to us (e.g. operating systems)
  4. The possibility of modifying the architecture in a simple way
Technically speaking, most corporate information systems are today quite far from this scenario; nevertheless, it is already possible to start building your own "beehive" system in house by considering the following guidelines:
  1. From the infrastructure point of view, the Cloud is certainly the road to follow. The idea of the computational resource as the computer sitting on the desk must be abandoned: the computational resource is itself a software entity. There is no point in hesitating. Where, for various reasons, a private infrastructure of computational resources is required, it should nevertheless be configured to be used as a private cloud.
  2. From the virtualization point of view, containers are currently the most promising road. They allow us to abandon the abstraction of the virtual machine and start reasoning in terms of the computational resource as the minimal environment sufficient for running an application or a software functionality. Containers also make it possible to move software components simply and quickly, independently of the underlying cloud platforms.
  3. From the application development point of view, services and microservices are the road that should be followed, since they allow the creation of smaller, targeted functionalities that are easy to modify and maintain. To get an intuition of what a service and/or microservice oriented approach to software development can mean, think of your software functionalities as a big box of Lego bricks that you can compose with one another to obtain, time after time, what best fits your needs.

The final idea is that our information system should appear to us as a set of software components that communicate with one another and that can be connected, moved, changed and modified simply, according to our needs.
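Just to make the Lego-brick image concrete with the technology discussed elsewhere on this blog, here is a minimal, purely illustrative Jolie sketch (service, port and operation names are invented for the example) of one component composing another small "brick" simply by invoking it:

include "console.iol"

// illustrative interface of a tiny, single-purpose "brick"
interface PriceInterface {
RequestResponse:
     getPrice( string )( double )
}

outputPort PriceService {
Location: "socket://localhost:8001"
Protocol: sodep
Interfaces: PriceInterface
}

main {
    // another component composes the brick by simply invoking it
    getPrice@PriceService( "violin" )( price );
    println@Console( price )()
}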


Every company is a small software house

It is clear that, just as a beehive produces no honey without bees, and a box of Lego does not become a spaceship without someone taking care to give it that shape, likewise an information system conceived and built to be very flexible and adaptable will never produce any meaningful result without someone taking care of managing and "programming" it.

The information system must become part of the core business of every company and can no longer be seen as an appendix serving the company's other activities. For this reason, if it is core business, its functioning must be known and controlled by the company itself. An approach aimed solely at managing external suppliers can no longer be sufficient; it becomes necessary to bring part of that work in house in order to manage it responsibly, first hand. Clearly it is unthinkable to bring every aspect and functionality in house, that would be absurd; surely, though, the majority of the functionalities related to business processes and customer relations should be brought back inside, so as to maintain effective control over the whole information system.

This observation leads to the inescapable consequence of starting to think of our ICT departments as true software houses, able to produce the software needed for our own requirements.


Managing the transition

Clearly, what we have described above is only the end point for hitting that target of flexibility and adaptability we talked about. Our advice is to get there gradually, within your possibilities, without overdoing it. The important thing is to become aware of the path to be taken and to start preparing, step by step, for reaching it. Some important things can already be kept an eye on:

  1. As far as possible, start the transition toward cloud computing and, even where you have to work with your own machines, try to see them only as hosts for virtual machines. For those who have already taken this step, it is worth starting to evaluate container technology.
  2. If possible, avoid all software that does not provide a web user interface (i.e. usable through a browser) and instead requires installing an application locally on your machine. Unless such a requirement is necessary for particular security reasons, avoid this configuration as much as possible. The need to install a specific application on every user's machine is, from a management point of view, a brake.
  3. For any software you are planning to buy, put among the minimum technical requirements for its acceptance the presence of modules that enable it to be integrated easily (REST APIs or Web Services). Make sure the product already has such modules in its catalog, and try to understand clearly what adopting them entails in terms of costs and licenses. Such a requirement should become a standard and not an optional, a bit like requiring that a car have a steering wheel. If such modules are not available, our advice is, if possible, to discard the product in favor of others that do have these characteristics.
  4. Give priority to multi-platform software, that is, software able to run in several different environments, and as far as possible try to avoid software that binds you to a single supporting platform.
  5. If you have not already done so, start introducing open source software into your information system. Remember, though, that this operation must always be done considering the need to then bring in internal personnel able to take responsibility for it and take care of it. Otherwise, it is better to go with commercial software.
  6. If you have not already done so, start documenting your information system, in the sense of mapping the software functionalities you have, their costs, the supports they require and the data they exchange and share with the other software. Also start analyzing the data flows and ask yourselves whether in many cases you could automate some procedure, letting machines do something instead of human beings. Remember this maxim: repetitive jobs should be done by machines!
  7. Start planning a gradual increase of the ICT budget, with a corresponding increase in dedicated personnel. Always consider that your ICT team must not only produce software with its head bent over the keyboard, but must also know how to look around, keep up to date and itself be flexible to change. So always try to enrich the team with creative personalities who, besides having the technical skills to understand the system, are also able to look for new and experimental solutions. In this regard, remember to start carving out part of the budget for experimentation: it will allow your team to be more confident in what it is doing and, above all, it will keep you from throwing yourselves into new IT adventures without knowing where you are going. Trust me.
  8. Start seriously considering a continuous training plan for your personnel. Remember that you will entrust human beings with two very important tasks: responsibility and creativity. To carry them out in the best way, they will need to be continuously updated on what is happening around them. The idea that a human being should be competent in only a single technical aspect of the company is, for us, definitively gone, and it certainly cannot hold in an Industry 4.0 perspective; so take care to share more of the context in which the company moves with your employees. Responsible and creative action springs from a good knowledge of the context.




July 14, 2017
Posted by Unknown
Testing microservices is a fundamental task in a microservice oriented system, and containerization offers a great opportunity for automating it. Containers can be created and connected on demand, so they provide the perfect environment for performing tests: it can be created and then destroyed when the tests are finished. Moreover, we can create a testing environment which is an exact copy of the production one.

In my previous post I showed Jocker, a Jolie component able to interact with Docker by offering a subset of its functionalities as Jolie operations instead of REST ones. In this post I am going to exploit Jocker for automating a test on a simple Jolie microservice, orchestrating it from another Jolie orchestrator. In order to do that, I will use a simple example you can find in the Jocker git repo under the folder ExampleOrchestrator/TestingDBSystem.

The system under test
The system under test is very simple: it is just a microservice connected to a PostgreSQL database.



You can find the code of this simple Jolie microservice here. As you can see, this microservice has only one RequestResponse operation, called getContent, which is in charge of selecting a field of a row (field2) from table testTable of the DB, depending on the value of column field1. Very simple.
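The real code is linked above; just to fix ideas, such a service could look roughly like the following sketch, which uses Jolie's standard Database service (the message types, the port location and the connection parameters here are simplified assumptions, not the exact code of the repo):

include "database.iol"

interface ContentInterface {
RequestResponse:
     getContent( string )( string )
}

execution{ concurrent }

inputPort Content {
Location: "socket://localhost:8002"
Protocol: sodep
Interfaces: ContentInterface
}

init {
    // placeholder connection parameters for the example
    with( connectionInfo ) {
        .host = "postgres-host";
        .driver = "postgresql";
        .port = 5432;
        .database = "testdb";
        .username = "postgres";
        .password = "postgres"
    };
    connect@Database( connectionInfo )()
}

main {
    getContent( request )( response ) {
        // select field2 of the row whose field1 matches the request
        q = "SELECT field2 FROM testTable WHERE field1=:f1";
        q.f1 = request;
        query@Database( q )( result );
        response = result.row.field2
    }
}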

Orchestrating the Test
Now I'll show you how to test the microservice by checking whether it properly returns the correct responses when some request messages are sent. In order to do so, I use this simple orchestrator for interacting with Jocker, which executes all the actions I need. In particular, the orchestrator performs three main activities:
  • Preparation of the testing environment
  • Test execution
  • Removal of the testing environment
Preparation of the testing environment
The main idea is to prepare the testing environment by creating a container for each basic component of the system. Here we have two basic components: the PostgreSQL database and the Jolie microservice.

The PostgreSQL Database container is obtained in the following way:
  1. pulling down the postgresql image from Docker Hub
  2. creating the corresponding container
  3. starting the container
  4. initializing the database we need by creating it from scratch
  5. initializing the required known data into the database
Steps 4 and 5 could be skipped if we assume we already have a postgresql test image with a pre-installed database initialized with the required data.
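Inside the orchestrator, these steps become plain Jolie invocations on the outputPort toward Jocker (named DockerIn in the Jocker examples). The following fragment is only a sketch of the flow: the operation and field names here are my assumptions, the actual ones are defined in Jocker's InterfaceAPI.iol:

// illustrative only: check InterfaceAPI.iol for the real operation names
createImageRequest.fromImage = "postgres";
createImage@DockerIn( createImageRequest )( pullResponse );      // 1. pull the image

createContainerRequest.Image = "postgres";
createContainer@DockerIn( createContainerRequest )( container ); // 2. create the container

startRequest.id = container.Id;
startContainer@DockerIn( startRequest )( startResponse )         // 3. start it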

On the other hand, the Jolie microservice image can be built by following the same steps explained here. In particular:

  1. a temporary directory is created and all the content of the ServiceToTest folder is copied in. 
  2. a Dockerfile is dynamically created and added to the temporary folder
  3. a tar file of the temporary folder is created 
  4. a docker image of the microservice is created by invoking Jocker
  5. a container is created starting from that image
  6. the container is started
The final environment is a system like the following one where the two basic components are now encapsulated within two different containers:




Test execution
The test execution phase is very simple: the orchestrator just sends all the requests I want to test to the microservice inputPort and checks whether the results are as expected. For the sake of this example there is only one request to test but, clearly, there could be hundreds, depending on the operations to test and the variety of data to be considered.
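Assuming an outputPort ServiceUnderTest pointing at the microservice's inputPort and console.iol included, a single check can be as simple as the following sketch (the request and expected values are invented here, since they depend on the data loaded at step 5 of the environment preparation):

// send a test request and compare the reply with the expected value
getContent@ServiceUnderTest( "key1" )( actualResponse );
if ( actualResponse == "value_expected_for_key1" ) {
    println@Console( "TEST OK" )()
} else {
    println@Console( "TEST FAILED" )()
}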

Removal of the testing environment
When the test is done, I don't need the test environment any more and I can destroy it. In particular:
  1. I stop the two containers
  2. I remove the two containers 
  3. I remove the two images (in reality, as you can see in the code, I do not remove the postgresql image, just because pulling it down again from Docker Hub takes time, but it is up to you).

Some interesting notes
  • In this example we do not exploit the possibility of creating links among different Docker containers; instead we directly call the containers on the ports they provide. In order to do this, once a container is created we also query Jocker to get its info details, extracting the local IP assigned to it by Docker. We use these IPs as reference hosts for connecting the microservice to the PostgreSQL database and the tests to the microservice.
  • In order to run the example it is sufficient to run the Jocker container as described here and then run the orchestrator placed in the folder ExampleOrchestrator/TestingDBSystem by using the following command:

    jolie orchestrator.ol

Conclusions
I hope this post can be a source of inspiration for all those software engineers who are addressing testing issues with microservices and containers. Let me know if you have questions or doubts.













July 06, 2017
Posted by Unknown
Recently we spent time integrating Jolie and Docker. Our idea was very simple: since Jolie is a very good language for orchestrating microservices in general, why not use it also for orchestrating Docker containers?

Thanks to Andrea Junior Berselli, who started to work on this topic during his degree at the University of Bologna, we can now say that a first component able to integrate Docker with Jolie exists! Its name is Jocker [github project]!

How does Jocker work?
Jocker is a Jolie microservice which is able to call the REST API of Docker (we have implemented only a subset so far) and to offer it as plain Jolie operations, thus avoiding having to deal with all the details of REST/JSON calls. Here you can see the Jolie interface of Jocker. The architecture is very simple:


Jocker must be executed on the same machine where the Docker server is running. It communicates with Docker through the local socket /var/run/docker.sock, and it supplies its Jolie operations at the default location localhost:8008 with protocol sodep.

Jocker container
The easiest way to run Jocker is to pull down its container image and then start it. The Jocker image is available in the jolielang section on Docker Hub and it can easily be pulled down by using the following command:

docker pull jolielang/jocker

Once pulled down, run the following command to execute it:

docker run -d -v /var/run/docker.sock:/var/run/docker.sock -p 8008:8008 jolielang/jocker

Jocker from sources
If you want to run Jocker from sources, you need some extra steps before continuing:
  • you need to install Jolie on your machine
  • you need to install the libmatthew Java libraries in order to enable local sockets.
Running Jocker is very simple: just go into the jocker folder and then type the following command:

jolie dockerAPI.ol

Jocker's listening location can be changed by editing the file config.ini.

Jocker Clients
It is very easy to interact with Jocker: just create the following outputPort in your Jolie microservice and use it as usual:

outputPort DockerIn {
    Location: "socket://localhost:8008"
    Protocol: sodep
    Interfaces: InterfaceAPI
}


where InterfaceAPI can be downloaded from here. As an example, you can request the list of all the containers with the following client:

include "console.iol"
include "string_utils.iol"
include "InterfaceAPI.iol"

outputPort DockerIn {
    Location: "socket://localhost:8008"
    Protocol: sodep
    Interfaces: InterfaceAPI
}

main {
    // ask Docker for the complete list of containers, including stopped ones
    rq.all = true;
    containers@DockerIn( rq )( response );
    // pretty-print the returned response tree
    valueToPrettyString@StringUtils( response )( s );
    println@Console( s )()
}


In the GitHub repository of the project there are some sample clients you can use for testing Jocker.

Enjoy Jocker and, please, send us comments and suggestions for improving it.

April 19, 2017
Posted by Anonymous
Here we host a post from Danilo Sorano, who collaborated with us and the Jolie team during his internship at Imola Informatica and italianaSoftware. Danilo's internship is part of his studies at the University of Bologna. Danilo chose to work on a project about Jolie and databases. In particular, he contributed to developing a tool that automatically generates a Jolie service which facilitates interaction with an existing PostgreSQL database.

Congratulations to Danilo! His post follows:

JDM
Jolie Database Manager for PostgreSQL database by Danilo Sorano

The Jolie database manager is a tool whose goal is to act as a facilitator for the management of PostgreSQL data sources by using microservices. The tool can be found on GitHub: https://github.com/jolie/db_connector
The main purpose of JDM is to simplify database management operations, specifically the insertion, modification, deletion and retrieval of data. The tool has been designed to spare the user from writing standard queries on tables and views, which can easily be automated.





Usage instructions:
  1. Create the database and tables/views, only if you are creating a database from scratch; otherwise skip this step.
  2. Start the JDM server by running main_table_generator.ol inside folder server
  3. Configure the information necessary to connect to the database (file config.ini)
  4. Start the client by running createDbService.ol inside folder client
  5. All the files will be generated in folder db_services: metadata extraction from the database and creation of the service for the database.
The database service is divided into two parts:
  • Automatic service
  • Custom service
The automatic part provides basic operations for the database:
  • Create: INSERT query
  • Update: UPDATE query
  • Remove: DELETE query
  • Get: SELECT query
The management of more complex operations, such as JOINs between tables, must be handled by the user by creating views.

The custom part allows for the development of customized queries on the database and it can be freely edited by the user. This gives the user the possibility of creating the most complex operations, for example to manage nested queries.

From an architectural point of view, the database service's main part embeds both the custom part and the automatic one. This mechanism allows the two parts to be independent of each other, so that the automatic part can be regenerated every time, for example, a view is added, without touching the custom part.
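In Jolie this kind of composition can be expressed with the embedding construct. The following fragment is just a sketch with invented file, port and interface names, not the code actually generated by JDM:

// the main service embeds the two parts as inner services,
// each one reachable through its own outputPort
outputPort Automatic {
Interfaces: automaticInterface
}

outputPort Custom {
Interfaces: customInterface
}

embedded {
Jolie:
    "automatic_service/main_automatic_ecommerce.ol" in Automatic,
    "custom_service/main_custom_ecommerce.ol" in Custom
}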

Example: Creation of a database service

In order to explain how the tool works we will use the example contained in the folder "examples", where there is an SQL file, "e-commerce.sql", to be used to generate the test database. The file creates the database of a simple e-commerce with three tables. The three tables are:
  • user (fiscal code, name, surname, email)
  • product (bar code, product name, description, quantity)
  • order (Order id, product id, user id, quantity)
The service can be used by changing the information in the file "config.ini":

[db_connection]
HOST=localhost
DRIVER=postgresql
PORT=5432
DATABASE=e-commerce
USERNAME=postgres
PASSWORD=postgres
Creating a service for a database
  1. Create the PostgreSQL database and its tables and views (it is possible to use the e-commerce example).
  2. Start the tool that generates the service (main_table_generator.ol).
  3. Change the information in the config.ini file.
WINDOWS
  • Start the server script TableGenerator.bat
  • Start the client script CreateDbService.bat
  • If everything went well, the service is created
OTHER OS
  • Go into the folder "server" and run the Jolie file "main_table_generator.ol"
  • Go into the folder "client" and run the Jolie file "createDbService.ol"
  • If the creation of the service is successful, the server prints the message: “Database table generation is finished with SUCCESS”.
Using the service of the example

The generated service for the e-commerce example is divided into two parts:
  • automatic
  • custom
The automatic part is represented by the file main_automatic_ecommerce.ol which contains the operations:
  • createuser, createproduct and createorder
  • getUser, getproduct and getOrder
  • updateUser, updateproduct and updateorder
  • removeuser, removeproduct and removeorder.
On the other hand, the custom part, as mentioned above, allows the user to create his own customized operations.

Example of a client

File try_operations.ol in the example shows the usage of the basic operations of the automatic part. The file is structured as follows:
  1. Importing of the interface and the location
    include"../db_services/e_commerce_handler_service/automatic_service/public/interfaces/includes.iol"
    include "../db_services/e_commerce_handler_service/locations.iol"
  2. Creation of an output port for the service
    outputPort Test {
    Location: e_commerce
    Protocol: sodep
    Interfaces: userInterface, orderInterface, productInterface
    }
    Note that the default location is "socket://localhost:9100", which is contained inside the file locations.iol
  3. Inside the main you can make the call to the service operations. 
The Filter

Before introducing the individual operations, it is better to explain the filter field. The filter field is of type FilterType, which is defined inside "automatic_service/public/types/e_commerceDatabaseCommonTypes.iol":

type ExpressionFilterType: void {
    .eq?: bool
    .gt?: bool
    .lt?: bool
    .gteq?: bool
    .lteq?: bool
    .noteq?: bool
}

type FilterType: void {
    .column_name: string
    .column_value: any
    .expression: ExpressionFilterType
    .and_operator?: bool
    .or_operator?: bool
}

The filter field is important to define the WHERE clause of the query:
  • column_value: value of the column
  • column_name: name of the column of the expression
  • expression: the operator used to check the value, this is an ExpressionFilterType
  • and_operator: we define this field only if we want to concatenate another expression, in this case an AND operator
  • or_operator: we define this field only if we want to concatenate another expression, in this case an OR operator
The ExpressionFilterType defines the operators:
  • lteq: "<="
  • eq: "="
  • gt: ">"
  • lt: "<"
  • gteq: ">="
  • noteq: "!="
As an example, let us suppose we want to define the following WHERE clause: WHERE product_name = 'Fried Chicken' AND quantity >= 10:

getproductRequest.filter.column_name = "product_name";
getproductRequest.filter.column_value = "Fried Chicken";
getproductRequest.filter.expression.eq = true;
getproductRequest.filter.and_operator = true;
getproductRequest.filter[1].column_name = "quantity";
getproductRequest.filter[1].column_value = 10;
getproductRequest.filter[1].expression.gteq = true;

Example of a Get operation

To select one or more rows in the product table of the e-commerce example we use the operation getproduct:

getproduct( getproductRequest )( getproductResponse ) throws SQLException SQLServerException

It is worth noting that the only field inside the request type is the filter field:

type getproductRequest: void {
    .filter*: FilterType
}

In the response message we get the rows of the select query:

type getproductRowType: void {
    .id_product: int
    .product_name: string
    .description: string
    .quantity: long
}

type getproductResponse: void {
    .row*: getproductRowType
}

This operation selects one or more rows inside the product table. Here is an example of the call:

/* Get all the rows for "Fried Chicken" with quantity equal to or greater than ten */
getproductRequest.filter.column_name = "product_name";
getproductRequest.filter.column_value = "Fried Chicken";
getproductRequest.filter.expression.eq = true;
getproductRequest.filter.and_operator = true;
getproductRequest.filter[1].column_name = "quantity";
getproductRequest.filter[1].column_value = 10;
getproductRequest.filter[1].expression.gteq = true;
getproduct@Test( getproductRequest )( response );

Example of a Create operation

To insert a row in the user table we use the createuser operation:

createuser( createuserRequest )( createuserResponse ) throws SQLException SQLServerException

- createuserRequest: the fields inside the request type are the fields of the user table.

type createuserRequest: void {
    .fiscalcode: string
    .name: string
    .surname: string
    .email: string
}

- response: createuserResponse is a void type.

This operation inserts a row inside the user table. To understand better how it works, here is an example:

/* The user John Silver is inserted inside the user table */
createuserRequest.fiscalcode = "1";
createuserRequest.name = "John";
createuserRequest.surname = "Silver";
createuserRequest.email = "john.silver@mail.com";
createuser@Test( createuserRequest )();

Example of a Remove operation

To remove one or more rows from the user table we use the removeuser operation:

removeuser( removeuserRequest )( removeuserResponse ) throws SQLException SQLServerException

- removeuserRequest: there is only the filter field, which specifies the condition of the WHERE clause.

type removeuserRequest: void {
    .filter*: FilterType
}

- response: removeuserResponse is a void type.

This operation removes one or more rows inside the user table. To understand better how it works, here is an example:

/* In this case we remove the user John Silver */
removeuserRequest.filter.column_name = "surname";
removeuserRequest.filter.column_value = "Silver";
removeuserRequest.filter.expression.eq = true;
removeuser@Test( removeuserRequest )();

Example of an Update operation

To update one or more rows in the user table we use the updateuser operation:

updateuser( updateuserRequest )( updateuserResponse ) throws SQLException SQLServerException

- updateuserRequest: the fields inside the request type are the fields of the user table plus the filter field.

type updateuserRequest: void {
    .fiscalcode?: string
    .name?: string
    .surname?: string
    .email?: string
    .filter*: FilterType
}

- response: updateuserResponse is a void type.

This operation updates one or more rows inside the user table. To understand better how it works, here is an example:

/* Changing the surname of John from "Silver" to "Smith" */
updateuserRequest.surname = "Smith";
updateuserRequest.filter.column_name = "surname";
updateuserRequest.filter.column_value = "Silver";
updateuserRequest.filter.expression.eq = true;
updateuser@Test( updateuserRequest )();

Future features
  • Ability to support more databases (PostgreSQL, MySQL, etc.).
  • Possibility to extend the management to more data types, especially complex ones.


April 18, 2017
Posted by Unknown

As we described here, with Jolie we are pioneering the linguistic approach to dealing with microservices. Our idea is that microservices are introducing a new programming paradigm which can be crystallized within a programming language. The focus of this post is the definition of microservice from our point of view: a linguistic point of view.

A service is the single unit of programmable software
In recent years the concept of service has been investigated in the area of Service Oriented Computing, and several definitions have been provided for service contracts, service providers, service discovery and so on. All these definitions are quite abstract, because services have been conceived to be technology agnostic, both in the case of SOA and in that of microservices. This means that it is possible to develop a service in any given technology; in short, services are technology agnostic.

Technology agnosticism is a very important feature which allows us to engineer a software system free from any technology lock-in. But our purpose here is to give the definition of a service as a single unit of programmable software which cannot be fragmented into sub-parts. For this reason, you will find that all the definitions I am going to provide here are strongly related to a specific technology: Jolie. If you would like to see how we chose to model the service oriented programming paradigm in a single language, you can continue reading; otherwise you can skip this post. If we are wrong, or we are missing some points, or you know other technologies which match the definitions, please send us your feedback. We are very excited to share ideas on this open topic.

As a starting point, let me explain the first assumption we made in the linguistic paradigm: the service is the single unit of programmable software. Usually, a service is obtained by programming a server (no matter whether it is simple or not) joined with some business logic which represents the functionalities to serve:

                             SERVER + BUSINESS LOGIC = SERVICE

In a linguistic paradigm such an equation is no longer valid, simply because servers do not exist. Only services exist. It is not possible to program a server, because you can program only services. So, forget servers (do not confuse this with serverless, which is a different approach). So, if there are no servers but only services, what is a service? As it happens in Object Orientation, where classes are logical definitions and objects are the instances of classes in a running environment, let me call the logical definition of a service with the term service and its running instance with the term microservice.

                                     SERVICE --> MICROSERVICE

For each service there can be more than one microservice, but each microservice is just the running instance of a service. The service is the single unit of programmable software. In the following I am going to build the definition of a service by giving some qualities it has to provide. At the end of this post I'll give the definition of service.

Services exchange messages
The only way of exchanging data among services is messages. There is no other way. A message is just a limited portion of data transmitted in a limited portion of time. A service can both receive messages and send messages. In a SOA, a service which both receives and sends messages is usually called an orchestrator. Such a difference does not exist in the linguistic paradigm: an orchestrator is just a service. In particular, in Jolie message exchange is enabled by means of ports. Messages are received by means of inputPorts and they are sent by means of outputPorts. Similar constructs are used in WS-BPEL, where they are called partnerLinkTypes.
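As an illustration (all the names are invented for the example), a service that both receives and sends messages simply declares both kinds of port:

// messages are received here
inputPort MyInput {
Location: "socket://localhost:8000"
Protocol: sodep
Interfaces: MyInterface
}

// messages are sent to another service from here
outputPort Downstream {
Location: "socket://localhost:8001"
Protocol: sodep
Interfaces: DownstreamInterface
}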

Services can have a behaviour
The behaviour defines the business logic to be executed when a message is received. In the behaviour it is possible to compute over the received data and/or send messages to other services. In Jolie the behaviour is expressed in the scope main. A behaviour can define different operations, and different business logics can be joined to different operations. In Jolie, multiple operations can be expressed in the same behaviour by using the non-deterministic operator:

main {
[ op1( req1 )( res1 ) {
    businessLogic1
}]  

[ op2( req2 )( res2 ) {
    businessLogic2
}]  

...

[ opn( reqn )( resn ) {
    businessLogicn
}]  
}

An operation must always express a computation that is finite in time. In other words, when triggered, an operation must always reach an end state. I say that an operation is divergent if its behaviour defines an infinite computation. Jolie allows for the definition of divergent operations by defining infinite loops:

divergentOperation( request )( response ) {
    while( true ) {
           nullProcess 
    }
}

Divergent operations are deprecated in Jolie.

Services declare interfaces
The operations of the behaviour must be declared in a machine readable interface. The interface defines all the available operations of a given service. Interfaces are used as a concrete contract for interacting with a service. In Jolie interfaces are also equipped with message type declarations.

type MyRequestType: void {
    .msg: string
}

type MyResponseType: void {
    .resp_msg: string
}

interface myInterface {
RequestResponse:
     myOperation( MyRequestType )( MyResponseType )
}

Services execute sessions
Services execute sessions for serving requests. A session is a running instance of a service operation. Sessions are independently executed and their data are independently scoped. If we send three messages to a microservice which implements the following service, we will receive three different reply messages, one for each request message.

main {
   test( request )( response ) {
      response = request
   }
}

If we concurrently send three request messages with contents "1", "2" and "3", we will receive three related reply messages with the same contents.
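For instance (the outputPort name is invented for the example), a client could fire the three requests with Jolie's parallel composition operator |, each call getting back its own content:

main {
    // the three invocations run in parallel and are served by three independent sessions
    { test@EchoService( "1" )( r1 ) |
      test@EchoService( "2" )( r2 ) |
      test@EchoService( "3" )( r3 ) }
}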


A definition of service

Here I try to summarize a definition of service starting from the basic concepts highlighted above. Jolie provides more linguistic elements than those highlighted here. Maybe more basic concepts could be added to the definition and others should be removed. This is just a starting point for investigating microservices from a linguistic point of view. All contributions are encouraged!

A service is a unit of programmable software able to exchange messages with other services, whose behaviour is triggered by incoming messages and is defined by a set of finite computation logics called operations, which are declared within a machine readable interface. A running instance of a service is called a microservice, and the inner instances of its triggered operations are called sessions. Sessions are executed independently and their variables are independently scoped.









December 29, 2016
Posted by Anonymous
On December 20th the Meeting on Microservices (MoM) took place in Bologna, organized by italianaSoftware with the aim of creating a first occasion for companies, public administrations and the research world to meet on the subject of microservices. We are very happy that the event was a success, with a good number of interested participants and a series of truly stimulating talks. As a first conclusion following this day, we can certainly state that "microservices" is today a topic of absolute interest, and that all the practitioners involved intend to learn more and better understand the opportunities and the risks that this new architectural approach can offer.


The positive element of the first edition of the MoM is certainly that it managed to bring together both theoretical and academic points of view and the practical use cases of those companies that were the first to implement microservice-oriented solutions. Probably the most relevant thing we want to highlight here is how this architectural solution can be adopted by companies of all sizes to solve the most diverse problems. The presentations by Monrif and H2B illustrated how microservices can handle both document archiving flows and the implementation of a full-fledged B2B platform. It must certainly be underlined that their adoption requires rethinking some of the existing processes for software design and programming, and that many challenges now open up in the IT world following the adoption of microservices. On the one hand, the analysis and design of the architecture become fundamental steps for all IT teams intending to adopt microservices; on the other hand, many activities that are generally outsourced can now be brought back in house, producing savings on one side and a quicker response to business needs on the other.

The debate on microservices has only just begun in earnest, and we are certain that in the coming years they will be discussed again and again. We, as italianaSoftware, will be there, and we will work so that the MoM can become a periodic national event and a reference point for everyone interested in the subject.




Below we list the presentations with a very short summary of their contents.

Genesis of a technology, from research to industry... - Maurizio Gabbrielli (DISI - University of Bologna)
Innovating is always a difficult challenge, where the possibility of failure must be considered from the start. The development and maturation of the Jolie technology are an interesting case study in which, starting from the development of a mathematical model, it was possible to arrive at a programming language that brings benefits to the production world.



The microservices revolution - Claudio Guidi (italianaSoftware)
Microservices promise to revolutionize the way software is designed and programmed. italianaSoftware proposes a specific technology for approaching microservice programming: Jolie. It stands out because it approaches microservices through a linguistic paradigm rather than through the systems-oriented paradigm that has been the best known so far.



Implementation of a microservice solution: organizational and economic benefits - Balint Maschio (Monrif s.p.a.)
Monrif s.p.a. started using microservice solutions initially to solve system integration problems related to document archiving from the SAP environment toward third-party applications. It has now started creating full-fledged flows through dedicated micro-orchestrators.


Industry 4.0, how Italian industry will be revolutionized - Paola Perini (Innovami)
Industry 4.0 is the new keyword used to describe the already ongoing revolution of the Italian industrial sector, where "every company will also have to turn into a software company".



H2B and microservices: a success story - Alessandro Suzzi (H2B)
Microservices helped H2B build both their agent-oriented application and the B2B web portals for managing the product catalog and the orders of a group of companies in the hardware sector managed by H2B.

From Service Oriented Architectures (SOA) to microservices - Claudio Bergamini (Imola Informatica)
Service Oriented Architectures and microservices are often grouped together and compared with each other. The topic is certainly interesting: both rest their roots on the idea of the service as a component, albeit with implementation differences. What is certain is that the specific analysis of the different usage scenarios will establish where it is better to adopt one solution or the other.

DevOps, Cloud and Containers - Luca Acquaviva (Imola Informatica)
The introduction of microservices, and the management of their containers in particular, requires rethinking one's infrastructure and one's DevOps processes. Even the architecture of a web application can change radically.

Microservices, scenarios of the near and far future - Saverio Giallorenzo (DISI - University of Bologna)
Microservices are in all likelihood the springboard toward extremely interesting future scenarios to explore. Among the various branches they can lead to, there is certainly that of choreographies, currently explored at the University of Bologna.
October 12, 2016
Posted by Unknown
Jolie is now available on Docker, so it is now possible to develop and run a microservice inside a container.

But what about the deployment of a microservice in Docker? How can we build a deployable Docker container which includes the microservice we are working on?

Actually, it is a very easy task. It is sufficient to develop the microservice by following some simple rules and your Docker image will be ready in a few seconds!

Rule 1 : you need a Dockerfile for building an image of your microservice
First of all, create a file named Dockerfile in your working directory and write the following lines:

FROM jolielang/jolie-docker-deployer
MAINTAINER SURNAME NAME <EMAIL>


where SURNAME, NAME and EMAIL must be replaced with the maintainer's surname, name and email respectively. The Dockerfile will be used by Docker to build the image of your microservice. As you can see, the image you are creating is layered upon a previously created image called jolielang/jolie-docker-deployer.
You can find this image on the jolielang Docker Hub. It has been prepared to facilitate the deployment of a Jolie microservice as a Docker image. In order to use it correctly, just follow the next rules. As an example, suppose we want to deploy the following microservice, saved in the file helloservice.ol:

interface HelloInterface {
RequestResponse:
     hello( string )( string )
}

execution{ concurrent }

inputPort Hello {
Location: "socket://localhost:8000"
Protocol: sodep
Interfaces: HelloInterface
}

main {
  hello( request )( response ) {
        response = request
  }
}


Rule 2 : EXPOSE the inputPort ports
Remember that all the inputPorts of your microservice must be reachable from outside the container, thus you need to expose their ports in the Dockerfile.



In the example the inputPort is located at localhost:8000, thus we need to add EXPOSE 8000 to the Dockerfile. So, your Dockerfile now becomes like this:

FROM jolielang/jolie-docker-deployer
MAINTAINER SURNAME NAME <EMAIL>

EXPOSE 8000

Rule 3 : COPY the files of your project and define the main.ol
Now almost everything is in place for preparing the image; we just need to copy the project files into the Docker image. When doing this, pay attention to renaming the file that must be run to main.ol.

FROM jolielang/jolie-docker-deployer
MAINTAINER SURNAME NAME <EMAIL>

EXPOSE 8000
COPY helloservice.ol main.ol

Building your image

When the Dockerfile is ready we can build the Docker image of the microservice. In order to do this, just run the following command within your working directory, which also contains the Dockerfile:

docker build -t hello .

where hello is the name we give to the image. Once the build is finished, you can easily check the presence of the image in the local registry by running the following command:

docker images

Running a container
Now, starting from the image, you can run all the containers you want. A container can be run by launching the following command:

docker run --name hello-cnt -p 8000:8000 hello

where hello-cnt is the name we give to the container. Note that the parameter -p maps the microservice port (8000) to port 8000 of your localhost. You can check that the container is running by launching the following command, which lists all the running containers:

docker ps

Your microservice is now deployed and listening for requests on port 8000. You can try to invoke it with a client like the following one. Remember to launch the client in a separate shell on your localhost!



include "console.iol"

interface HelloInterface {
RequestResponse:
     hello( string )( string )
}


outputPort Hello {
Location: "socket://localhost:8000"
Protocol: sodep
Interfaces: HelloInterface
}

main {
  hello@Hello( "hello" )( response );
  println@Console( response )() 
}



Advanced settings
So far we have deployed a very simple service but, usually, we deal with microservices that are more complicated than the hello service presented above. In particular, it is very common that some constants or outputPort locations must be defined at deployment time. In order to show this point, let me now consider the following service:

interface HelloPlusInterface {
RequestResponse:
     helloPlus( string )( string )
}

interface HelloInterface {
RequestResponse:
     hello( string )( string )
}

execution{ concurrent } 

constants {
   CUSTOM_MESSAGE = " :plus!"
}

outputPort Hello {
Location: "socket://localhost:8000"
Protocol: sodep
Interfaces: HelloInterface
}

inputPort HelloPlus {
Location: "socket://localhost:8001"
Protocol: sodep
Interfaces: HelloPlusInterface 
}

main {
  helloPlus( request )( response ) {
        hello@Hello( request )( response );
        response = response + CUSTOM_MESSAGE
  }
}

This is a very simple microservice which has a dependency on the previous one. Indeed, in order to implement its operation helloPlus, it needs to invoke the operation hello of the previously deployed microservice. Moreover, it uses a constant, CUSTOM_MESSAGE, to define a string to be appended to the response string.




Usually, we would like some of these parameters to be definable at deployment time, because they directly depend on the architectural context where the microservice will run. Thus, we would like to create an image which is configurable when it is run as a container. How can we achieve this?

Rule 4 : Prepare constants to be defined at deployment time
The image jolielang/jolie-docker-deployer we prepared for deploying microservices has been built with some specific scripts which are executed before running the main.ol. These scripts read the environment variables passed to the docker container and transform them into a file of constants which is then read by your microservice. The most important facts to know here are:
  1. Only the environment variables prefixed with JDEP_ will be processed
  2. The processed environment variables will be collected in a file of constants named dependencies.iol; if it already exists, it will be overwritten (see the sketch below)
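For instance, here is a sketch of the generated file, assuming the container is started with -e JDEP_CUSTOM_MESSAGE=" :plus!" (the exact formatting produced by the scripts may differ):

// dependencies.iol as generated inside the container before main.ol is run
constants {
JDEP_CUSTOM_MESSAGE = " :plus!"
}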
From these two points, we change the microservice as follows:
 

interface HelloPlusInterface {
RequestResponse:
     helloPlus( string )( string )
}

interface HelloInterface {
RequestResponse:
     hello( string )( string )
}

include "dependencies.iol"

execution{ concurrent } 

outputPort Hello {
Location: JDEP_HELLO_LOCATION
Protocol: sodep
Interfaces: HelloInterface
}

inputPort HelloPlus {
Location: "socket://localhost:8001"
Protocol: sodep
Interfaces: HelloPlusInterface 
}

main {
  helloPlus( request )( response ) {
        hello@Hello( request )( response );
        response = response + JDEP_CUSTOM_MESSAGE
  }
}

As you can notice, here there are two constants, JDEP_HELLO_LOCATION and JDEP_CUSTOM_MESSAGE, which need to be defined at the start of the microservice. They must be declared in the file dependencies.iol, which MUST be included in your microservice. This file just contains the declaration of the two constants:

constants {
JDEP_HELLO_LOCATION = "socket://localhost:8000",
JDEP_CUSTOM_MESSAGE = " :plus!"
}

During development, keep this file in your project and collect here all the constants you want to define at deployment time. When the service is run in the container this file will be overwritten, so you don't need to copy it into the Docker image.

The Dockerfile of the helloPlus service is very similar to the previous one:

FROM jolielang/jolie-docker-deployer
MAINTAINER SURNAME NAME <EMAIL>

EXPOSE 8001
COPY helloservicePlus.ol main.ol
We can create the image with the same command used before, but with the name hello_plus:

 docker build -t hello_plus .

Configuring the container
Now we just need to know how to pass the constants to the running container, and everything is done. Docker allows environment variables to be passed with the -e parameter of the run command. Thus the command is:

docker run --name hello-plus-cnt -p 8001:8001 -e JDEP_HELLO_LOCATION="socket://172.17.0.4:8000" -e JDEP_CUSTOM_MESSAGE=" :plus!" hello_plus

where hello-plus-cnt is the name we give to the container. Note that the constant JDEP_HELLO_LOCATION is set to "socket://172.17.0.4:8000", where the IP is 172.17.0.4. This is just an example: here you need to specify the IP that Docker assigned to the container hello-cnt, which is executing the helloservice.ol service. You can retrieve it by launching the following command:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' hello-cnt

Once the hello-plus-cnt container is running, you can simply invoke it with the following client:

include "console.iol"

interface HelloPlusInterface {
RequestResponse:
     helloPlus( string )( string )
}

outputPort HelloPlus {
Location: "socket://localhost:8001"
Protocol: sodep
Interfaces: HelloPlusInterface
}

main {
  helloPlus@HelloPlus( "hello" )( response );
  println@Console( response )()
}


Conclusion
In this post I showed how to deploy a microservice developed with Jolie as a Docker container. The procedure is very easy: just pay attention to the inputPorts and to the constants you want to configure at deployment time. For everything else you can simply rely on the Jolie language. Don't forget that you can also exploit embedding for packaging several microservices into one, thus deploying all of them inside the same container if necessary.
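As a minimal sketch of that last point (file and port names here are hypothetical, not taken from an existing project), a wrapper service can embed another one so that both run in the same container:

// main.ol (sketch): embeds helper.ol so that one container runs both services
include "helper_interface.iol" // assumed to declare HelperInterface

outputPort Helper {
Interfaces: HelperInterface
}

embedded {
Jolie: "helper.ol" in Helper
}

// ...the inputPort and main of the wrapper service follow as usual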

Enjoy!

September 05, 2016
Posted by Unknown
This post is about a couple of tools [https://github.com/jolie/jester] we developed to facilitate the programming of REST microservices with Jolie. Personally I am not a big supporter of REST services, but I think that a technology which aims at being a reference in the area of microservices, like Jolie, must have some tools for supporting REST service programming. Why? Because REST services are widely adopted and we cannot ignore such widespread adoption.

Ideally, Jolie as a language is already well equipped for API programming, also over HTTP, but the REST approach is so deeply coupled with the usage of the HTTP protocol that it introduces some strong limitations in the service programming paradigm. Which ones? The most evident one is that a REST service only exploits four basic operations: GET, POST, PUT and DELETE. The consequence of such a strong limitation on the possible actions is that the resulting programming style must provide expressiveness on data. This is why the idea of resources has been introduced in REST! Since we cannot program actions, we can only program resources.

Ok,  let's go with REST services!

...But,
...here we have a language, Jolie, that is more expressive than REST because the programmer is free to develop all the operations she wants. From a theoretical point of view, in Jolie she can program infinitely many actions instead of only four!



- Houston we have a problem! We need to put infinite operations inside just four!!!

- Why only four when we can have infinite?

- This does not matter now Houston, we have to do it!



Ok no problem! Follow our instructions!
First of all, note that we want to preserve all the benefits of using Jolie, thus the possibility to develop all the operations we want, while finally publishing the microservice as a REST service. We achieve such a result by exploiting a specific microservice architecture which is a composition of a router and the target microservice we want to publish as a REST one. The router, as described in Fabrizio's paper [https://arxiv.org/abs/1410.3712], is in charge of transforming REST calls into standard Jolie calls.



Ok, in order to explain how to proceed, let us consider as target microservice the demo one reported in the jester project [https://github.com/jolie/jester/tree/master/src/jolie/tools/demo]. It is a very simple service which emulates an order manager and supplies four operations: getOrders, getOrdersByItem, putOrder, deleteOrders.

Ok, there are four operations here, but this is just an example; we could have more than four operations :-). The main question now is:

What do we have to do to transform these operations into REST operations?
We need to use the jolie2rest.ol tool that you can find in the jester project. This tool analyzes the interface of the demo service and extracts a descriptor for the router which enables it to publish the demo service as a REST service.

Very simple! But before running the tool we need to know something more. We need to define how each operation of the demo microservice will be transformed into a REST one. In particular, for each target operation we need to specify whether we want a GET, a POST, a PUT or a DELETE method. These instructions can be provided directly in the interface of the demo service by exploiting the @Rest annotation inside the documentation comments. As an example, let us consider the operation getOrders:

/**! @Rest: method=get, template=/orders/{userId}?maxItems={maxItems}; */
getOrders( GetOrdersRequest )( GetOrdersResponse )


The annotation defines that the operation getOrders must be transformed into a GET HTTP method and that the URL template to be adopted is /orders/{userId}?maxItems={maxItems}. What is the template URL?
Since REST services deal with resources, the URL is used as a means for expressing the resource we want to access. Here we use the template as the means for transforming a call to an operation of the target service into a resource. In particular, the parameters between curly brackets will be filled directly with the corresponding values of the request message, whose structure is defined in the GetOrdersRequest type (for example, a call to GET /orders/homer?maxItems=10 would be routed to getOrders with userId = "homer" and maxItems = 10):

type GetOrdersRequest: void {
    .userId: string
    .maxItems: int
}


Now, we can proceed by running the following command:

jolie jolie2rest.ol localhost:8080 swagger_enable=true

where localhost:8080 is the location where the router is deployed, and the parameter swagger_enable specifies whether we want to enable the creation of the Swagger JSON descriptor file.
Note: the file service_list.txt contains the list of the target microservices to be transformed and the related inputPorts to be converted (more instructions can be found in the jester repository).

As a result the tool will generate two files:
  • router_import.ol
  • swagger_DEMO.json
The former must be copied into the router folder; it contains the script which enables the router to transform the REST calls into operation calls of the target service. The latter is just the Swagger descriptor to be provided as input to a SwaggerUI. Don't have a local SwaggerUI available? Follow these instructions to get one locally, otherwise go to point 6:



  1. Prepare the web server for the SwaggerUI application by downloading Leonardo from here [https://github.com/jolie/leonardo]
  2. Go to this SwaggerUI URL [https://github.com/swagger-api/swagger-ui/archive/v2.2.3.zip] and download the related web application project.
  3. Copy the content of the dist folder of the SwaggerUI project into the www folder of Leonardo.
  4. Open a shell and run jolie leonardo.ol
  5. Open a browser at the URL http://localhost:8000; the SwaggerUI web application should appear.
  6. Copy the swagger_DEMO.json file into a folder where it is reachable from the SwaggerUI; in the Leonardo scenario, put it inside the www folder.
  7. Write in the explorer bar of the SwaggerUI the URL for reaching the Swagger descriptor; in the Leonardo case, write: http://localhost:8000/swagger_DEMO.json
After this step, you should see the Swagger definition of your target demo service transformed into a REST one.

As you can see, the four operations of the target service have been transformed into the four different types of REST methods. Have a look at the interface annotations of the demo service to find the matches with the Swagger interface!

Nice, but what happens if I want to transform more than one Jolie microservice instead of a single one?
This question deals with an architecture like the following one, where the router is connected to several microservices:



There are no particular problems in achieving such a result. It is sufficient to list all the target microservices with the related inputPorts in the file service_list.txt and re-launch the jolie2rest tool!

Ok Houston! Here we are! But is it possible to avoid the usage of the router?
Yes, it is possible, but only if you accept not adhering completely to the REST approach. A Jolie microservice can be directly published by using an http inputPort with the message format set to json [http://docs.jolie-lang.org/#!documentation/protocols/http.html#http-parameters]. In this case the microservice will be able to serve the HTTP requests without requiring any router or proxy in the middle. If you want this, change the inputPort protocol of the demo service to an http one, as sketched below, and then use the jolie2rest tool with the parameter easy_interface set to true.
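A minimal sketch of the modified inputPort (the port name, location and interface name are illustrative, not taken from the demo code):

inputPort DemoInput {
Location: "socket://localhost:8080"
Protocol: http { .format = "json" }
Interfaces: DemoInterface
}

The command then becomes: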

jolie jolie2rest.ol localhost:8080 swagger_enable=true easy_interface=true

In such a case the router_import.ol file is not generated, but only the swagger_DEMO.json one. The operations are all converted into POST methods and the URLs do not follow the templates: the JSON request messages must be entirely defined in the body of the message. Try to replace the file swagger_DEMO.json in the SwaggerUI and perform some calls.

Generating client stubs from an existing Swagger definition
A last tool, which can be very useful when integrating existing REST services into a Jolie architecture, is jolie_stubs_from_swagger.ol. This tool takes as input an existing Swagger definition and generates a Jolie client for each published API.

As an example, you could try it by generating the clients for the petstore example supplied by the Swagger project [http://petstore.swagger.io/v2/swagger.json]. In order to do so, create a target folder where you want to store all the generated clients (for example petstoreFolder) and then run the following command:

jolie jolie_stubs_from_swagger.ol http://petstore.swagger.io/v2/swagger.json petstoreService petstoreFolder

where petstoreService is the token that will be used for generating the name of the Jolie outputPort of the petstore service inside the clients. As a result, in the folder petstoreFolder you will find a list of Jolie clients; in particular, you will have a client for each API defined in the petstore Swagger definition.

If you want to try sending a request, just open one of the clients and create the request message. For example, open getOrderById.ol and prepare the request message by adding the following Jolie code:

with( request ) {
.orderId = 8
};

then run the client as a usual Jolie script:

jolie getOrderById.ol


and the result should be printed out on the console!

You can exploit these clients inside your existing Jolie microservices. Just note that the generated file outputPort.iol defines all the information necessary to declare the outputPort to be used. Thus, just include it in your microservice project and then make the calls wherever it is most useful for you, as in the sketch below.
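A minimal sketch of such a reuse (the surrounding service is hypothetical; the operation and request structure follow the getOrderById example above):

include "console.iol"
// generated by jolie_stubs_from_swagger.ol; assumed to declare the petstoreService outputPort
include "outputPort.iol"

main {
    with( request ) {
        .orderId = 8
    };
    getOrderById@petstoreService( request )( response );
    // assuming the petstore order reply carries a status field, as in its Swagger definition
    println@Console( response.status )()
}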

Conclusion
Houston, everything is clear now! :-) With the REST tools described in this article we want to improve the Jolie language by providing the possibility to publish Jolie microservices as REST services and by giving an easy way to generate clients from existing REST services. We hope this can be useful for your projects and, please, do not forget to send us your feedback and improvements!





March 17, 2015
Posted by Unknown
WordPress developers may face the problem of integrating the website they are developing with extra data coming from external resources. Data synchronization is not always easy: it must be scheduled, data become visible on the web application only after each synchronization period, and it usually requires a level of maintenance that is high with respect to the amount of synchronized data.

In these cases it can be useful to synchronously call an external server in order to retrieve the data we need. Jolie can be very helpful for developing a fast API server to be integrated with your website.

This short post describes a way to easily integrate WordPress with a Jolie service.

First of all, let me consider the simple case of retrieving an address from a Jolie service. This service could be written as follows:

type GetAddressRequest: void {
    .name: string
    .surname: string
}

type GetAddressResponse: void {
    .address: string
}

interface ServerInterface {
    RequestResponse:
        getAddress( GetAddressRequest )( GetAddressResponse )
}


execution{ concurrent }

inputPort AddressService {
    Location: "socket://localhost:9001"
    Protocol: http
    Interfaces: ServerInterface
}

init {
    with( person[ 0 ] ) {
        .name = "Walter";
        .surname = "White";
        .address = "Piermont Dr NE, Albuquerque (USA)"
    };
    with( person[ 1 ] ) {
        .name = "Homer";
        .surname = "Simpsons";
        .address = "Street 69, Springfield USA"
    };
    with( person[ 2 ] ) {
        .name = "Sherlock";
        .surname = "Holmes";
        .address = "221B Baker Street"
    }


}

main {
    getAddress( request )( response ) {      
        index = 0;
        while( index < #person
                && ( person[ index ].name != request.name
                     || person[ index ].surname != request.surname ) ) {
            index ++
        };
        if ( index == #person ) {
            throw( PersonNotFound )
        } else {
            response.address = person[ index ].address
        }
    }

}


In this case I simulated a database with a simple vector of persons. The service is located at socket://localhost:9001 and you can easily invoke it through a browser by using this URL: http://localhost:9001/getAddress?name=Sherlock&surname=Holmes

Clearly, this is an example which runs on your local machine. It is simple to export it to a distributed scenario by simply running Jolie on a remote machine; let me suppose it is located at IP X.X.X.X.

When the Jolie service is running, we can invoke it from within a WordPress page. In order to do this, simply add the following PHP code to the page where you want to show the results:

$data = array(
    'method' => 'POST',
    'timeout' => 45,
    'httpversion' => '1.0',
    'blocking' => true,
    'headers' => array(),
    'body' => array( 'name' => 'Sherlock', 'surname' => 'Holmes' )
    );

$url = 'http://X.X.X.X:9001/getAddress';
$result = wp_remote_post( $url, $data );

print_r( $result );

if ( is_wp_error( $result ) ) {
    echo $result->get_error_message();
}


where $result contains the response message.

You can also exploit JSON-formatted messages by adding the following parameter to the http protocol of the Jolie service and rearranging the PHP code to manage JSON messages:

 inputPort AddressService {
    Location: "socket://localhost:9001"
    Protocol: http { .format="json" }
    Interfaces: ServerInterface
}


Moreover, you could connect the Jolie service to a database for retrieving the data you need; check the Jolie documentation to learn how to use databases in Jolie. A sketch of how the lookup above could be backed by a database follows.
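As a sketch of that idea (the SQLite file, table and column names are hypothetical, and the Database service usage should be double-checked against the Jolie documentation), the in-memory vector of the example above could be replaced by a database lookup:

include "database.iol"

init {
    // connect to a local SQLite database (illustrative connection data)
    with( connectionInfo ) {
        .driver = "sqlite";
        .host = "";
        .port = 0;
        .database = "addresses.db";
        .username = "";
        .password = ""
    };
    connect@Database( connectionInfo )()
}

main {
    getAddress( request )( response ) {
        // :name and :surname are placeholders bound from the subnodes of the query variable
        queryRequest = "SELECT address FROM persons WHERE name=:name AND surname=:surname";
        queryRequest.name = request.name;
        queryRequest.surname = request.surname;
        query@Database( queryRequest )( result );
        if ( #result.row == 0 ) {
            throw( PersonNotFound )
        } else {
            response.address = result.row[ 0 ].address
        }
    }
}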



February 03, 2015
Posted by Unknown
Recursion in service-oriented architectures is something unusual, because it seems to be useless. So, why a post about recursion in SOA? I think the big result we can obtain from service-oriented recursion is a better understanding of the nature of the service-oriented programming paradigm and of its technologies. Recursion, indeed, is a well-known programming pattern hugely adopted by programmers all over the world, and its usage can reveal some interesting features of a programming language. So, I want to use recursion in a SOA because I want to know more about SOA. Let's do it.

In the following example you can find an implementation of Fibonacci in Jolie.
[ You can copy it into a file, save it with the name fibonacci.ol, and then run it by typing the command
jolie fibonacci.ol ]


include "console.iol"
include "runtime.iol"

execution{ concurrent }

interface FibonacciInterface {
  RequestResponse:
    fibonacci( int )( int )
}

outputPort MySelf {
  Location: "socket://localhost:8000"
  Protocol: sodep
  Interfaces: FibonacciInterface
}


inputPort FibonacciService {
  Location: "socket://localhost:8000"
  Protocol: sodep
  Interfaces: FibonacciInterface
}

main
{
  fibonacci( n )( response ) {
    if ( n < 2 ) {
      response = n
    } else {
      {
        fibonacci@MySelf( n - 1 )( resp1 )
        |
        fibonacci@MySelf( n - 2 )( resp2 )
      };
      response = resp1 + resp2
    }
  }
}




The code is very intuitive and simple. The outputPort MySelf defines the external endpoint to be invoked, which corresponds to the input endpoint FibonacciService where the operation fibonacci is deployed. The invocations are performed in parallel (operator |) and they are blocking activities which wait until they receive a response from the invoked service. You can invoke the service by exploiting the following client, which sends 10 as input parameter:

include "console.iol"

interface me {
  RequestResponse:
    fibonacci( int )( int )
}

outputPort Service {
  Location: "socket://localhost:8000"
  Protocol: sodep
  Interfaces: me
}

main
{
  fibonacci@Service( 10 )( result );
  println@Console( result )()
}

Starting from this simple example we can understand some interesting features:

server sessions represent recursive call stack layers: each invocation opens a new session on the server, which represents a new layer in the recursive call stack.

loose coupling: each invocation is separated from the others, which guarantees that the value of n is not overwritten by different calls.

runtime outputPort binding: the binding of the output endpoint (MySelf) must be achieved at runtime and not at deploy time, in order to avoid frustrating programming issues like those reported here: http://blogs.bpel-people.com/2007/02/writing-recursive-bpel-process.html.

the invocation stack can be distributed: you can imagine deploying more than one Fibonacci service and switching the invocations from one to another depending on some internal parameter such as, for example, the number of open sessions. As an example, consider the code modified as follows:

init {
  getLocalLocation@Runtime()( global.location );
  MySelf.location = global.location;
  println@Console( MySelf.location )();
  global.count = 0
}

main
{
  fibonacci( n )( response ) {
    synchronized( lock ) {
      global.count++;
      println@Console( "begin n=" + n )();
      if ( global.count >= 100 ) {
        MySelf.location = "socket://localhost:8001"
      }
    };
    if ( n < 2 ) {
      response = n
    } else {
      {
        fibonacci@MySelf( n - 1 )( resp1 )
        |
        fibonacci@MySelf( n - 2 )( resp2 )
      };
      response = resp1 + resp2;
      synchronized( lock ) {
        global.count--;
        println@Console( "end n=" + n )();
        if ( global.count < 100 ) {
          MySelf.location = global.location
        }
      }
    }
  }
}

Here the location of the outputPort MySelf can be changed dynamically during the execution. global.count stores the number of currently open sessions; if it reaches 100, the location is changed to socket://localhost:8001, where a second Fibonacci service is deployed. In this way you can easily create a chain of Fibonacci services whose sessions participate in a single recursive calculation of a Fibonacci number.


Conclusions
Here I used service-oriented recursion for programming a Fibonacci service with Jolie which can be invoked by external clients. This service can also be chained with other copies of itself in order to obtain a distributed SOA for calculating Fibonacci numbers recursively. Such an example can be an interesting reference point for understanding how service creation and invocations work in a SOA.


August 21, 2014
Posted by Unknown
Some months ago I read this article about microservices and I found it very interesting because of the experience we have had so far in the development of the Jolie language. Jolie was born as a language for crystallizing SOA principles in a specific domain language but, day by day, we became more and more confident that the boundaries of SOA were not enough for dealing with some issues that we had encountered while programming with Jolie. Thus, I was very surprised and excited when I read about microservices, because I think they enlarge the domain of service-oriented programming in a way which is coherent with the results we have obtained so far. This is why I would like to share our experience with the microservices community. On the one hand my aim is to promote the usage of Jolie as one of the technologies which can help in the design and development of microservices; on the other hand I hope to give my contribution to the definition of concepts and practices for microservices. In particular, in this post I would like to introduce a simple concept called Service Threshold, which can be of help when reasoning about service and microservice componentization. The main idea behind the service threshold is to provide an abstract boundary which allows for the separation between a world of services and a world of components, where a component is a general piece of software.

The Service Threshold
The Service Threshold is the architectural line which separates services from other components (that are not services), as shown in Fig. 1, where I simply represent services as hexagons and other components as squares.

Fig 1. The Service Threshold abstractly separates services (hexagons) from 
components that are not services (squares).

It is a very simple concept but some questions need to be addressed:

Why do I need to introduce it?
A line always represents a division, in this case a logical division between two kinds of things that we consider different. I introduce it because it helps me to change my perspective on distributed software architectures. Indeed, I need to think of them in a way which allows me to design, deploy and manage them quickly and easily. Summarizing, I need that line because I am looking for a simplification of distributed and heterogeneous systems engineering.

Are there different properties above and under the service threshold?
Clearly, there are differences between the top plane and the bottom one. If I operate "above" the line I would like to be in a world where the service is the basic element, which can be composed and managed by following some specific rules related to services; no other kind of software component exists there. Otherwise, if I am "under" the line, the service is just something I have to create: it is the target of my work.

What are the differences between a service and a component?
A service is an autonomous running piece of software able to provide its functions when invoked. Its execution does not depend on components beyond those strictly required. The service functions should always be available for invocation; if not, the service should reply with a specific fault message. Invocations are always achieved by means of message exchanges, which can be synchronous or asynchronous; the transport protocol used for the message passing does not matter. A service can be stateless or stateful.
A component is a piece of software which cannot be considered a service.

Where is the right place to put the threshold line?
This is the core question every distributed software architect should deal with. Depending on the answer, I will design a different system with different properties. If I put this line very high I will have few services and a lot of components; if I put it at a low level I will have a lot of services and few components. So, the question can be rephrased as: how many services do I need? Or again: which are the basic properties that I need to take into account in order to identify all the required services?

The service threshold is not a line but a boundary
A distributed system is a system where software artifacts are separated; thus, even though I have discussed the service threshold as a horizontal line, it fits better to think about it as a boundary which allows me to isolate and identify the different services involved in a system.


Fig 2. The Service Threshold is a boundary which allows me to isolate and identify the services.

Once defined, the service threshold allows me to focus on the single service without worrying too much about its connections. I can just limit my effort to the development of the service as an autonomous provider of specific functions. The final target is to provide a basic set of services which represent the core functions of the application I am interested in. Some of them could be bought from a third-party supplier, the others could be developed from scratch. The most important thing is that each service function must be well separated from the others and, at the same time, must be strictly necessary to the overall system. As an example, consider the human body, where each organ supplies a well-defined function which is fundamental for life. The organs are precisely separated by means of different tissues and they are characterized by different biological properties.

All of them are needed, because the functions that they supply are needed: nothing is superfluous, nothing is missing. They are connected in a perfect and inexpensive way, because they all participate in forming the human body, a more complex and completely different organism.

Service connections and dependencies
Once all the services have been identified, I can deal with service communications and connections. A communication is performed when a message is sent by a service and received by another one, where a message is a limited set of data transmitted in a limited interval of time. A connection represents the possibility of performing a communication between two services: a sender and a receiver. Usually a service receiver is not aware of the sender (this is not true in the case of a stateful long-running transaction). A connection becomes a dependency when the sender needs to invoke another service in order to accomplish its own tasks.
Fig 3. Service A has a dependency on service B.
Service dependencies allow me to divide services into two main categories:
  • Autonomous services
  • Dependent services
Autonomous services usually control resources such as databases, electronic devices, computational resources, legacy systems, etc. These kinds of services are the software artifacts which enable the integration of the underlying resource with a system of services, and the APIs that they provide represent the only way of interacting with it. They are usually designed to be independent of the context they will be inserted in, and they are strongly loosely coupled.

Dependent services are built upon other services, because they need to call other services to accomplish their tasks. They can be:
  • Mixed services
  • Pure coordination services
Mixed services control resources as the autonomous ones do (e.g. a database), but they also need information from other services in order to provide all their functions.
Pure coordination services do not control any data resource; they just call other services to achieve their tasks (orchestrators are usually pure coordination services).
 

Fig.4 Autonomous services, Mixed services and Pure coordination services
By composing these kinds of services it is possible to obtain a complex distributed system which provides a new set of functions, different from those of the constituent ones. I use the term service complex system for a distributed system whose components are all connected services.

Fig. 5  A service complex system is a distributed system whose components are all connected services.


The service threshold can be applied over a set of services
A service complex system can be formed by tens or hundreds of services. Clearly, when the number of services is very high, it can be difficult to manage and maintain the system because of its complexity. In this case, in order to simplify the management, I exploit another interesting property of the service threshold: I apply it over a set of existing services by grouping them. The services grouped by a service threshold can be seen and treated as a single service. In this case all the inner connections are hidden from an external observer, and the only ones that remain relevant are those that can be observed outside the service threshold boundary.
Fig 6. The service threshold can be applied over a set of services in order to group them
and identify them as a single service.
There are no limits to the application of the service threshold over a set of services, thus I can apply it again and again in order to reduce the system to a limited set of services. Since a service is usually stateless and always ready to serve different clients concurrently, the same service can even be enclosed by two (or more) different service thresholds, as in the following picture:

Fig 7. A service can be enclosed by two different service thresholds.

In this case the resulting services both contain the same service inside. Is this possible? Yes; in the following section I will answer this question.

Primary thresholds and abstract thresholds
As I said before, a service is always a running piece of code. Thus, a set of files in our file system is not a service: it is just a set of files which may contain executable code, but they are not a service. They start to become a service when they run. This is a fundamental point, because a service exists only if it is able to respond to our requests. This is why it is possible to double a service by wrapping it into two different service thresholds, as represented in Fig 7: it is possible because I am grouping the service functions, not the pieces of code. I call abstract service thresholds all the service thresholds which group existing services. When I apply an abstract service threshold to a set of services I lose the deployment details of a service (where it is, how it is composed, which technology it uses, etc.) because I am focusing only on its functions. On the contrary, the primary service threshold is applied to a set of components that, when executed, become a service. By definition, the primary service threshold has a completely different nature with respect to the abstract one, because it is applied to components in order to obtain services. Nevertheless, I call them both service thresholds because they both allow me to identify a new service, even if they are applied to different domains.

Fig 8. The Abstract Service Threshold (on the left) groups existing services; the primary service threshold (on the right) groups a set of components that, when executed, become a service.

The primary threshold is a technological threshold
The Primary Threshold is a technological threshold because it clearly defines the boundary between service composition mechanisms and other technological mechanisms. It does not matter which kind of technology I select, but it is important that I am able to satisfy some minimum requirements that allow me to create a service (I suppose you would ask me which are the minimum requirements for creating a service; wait, wait, wait, I will discuss them in another post). Within the primary threshold I can exploit any kind of technology and any kind of communication approach (such as in-memory data exchanges, file exchanges, etc.). I can also use specific applications such as web servers, application servers, etc. The final target is to prepare the basic software layer which allows me to jump into the service world. Everything that happens inside this threshold is a matter strictly related to the technologies I chose for building the service. This point is important because it allows me to state that there exists a trade-off between the benefits obtained by the introduction of a service and the technological overhead I have to pay. Is it simple to maintain a service developed with a specific technology? Which skills do I require for its development? Does the technology easily scale with the service load? Moreover, do I need to adopt the same technology for all the services of my system, or do I need to change it? When? Will I be able to maintain the required skills for managing all the existing services I have? These are not trivial issues, because the more technologies I adopt, the more knowledge I have to manage within my team. Indeed, if I adopt a technology, I need someone who is able to manage it.

Conclusions
In this post I discussed service and microservice componentization by introducing the concept of the service threshold, which is a useful tool for approaching distributed systems based on services and microservices. I hope this post can be useful for those people who are trying to approach service system design in a simple and intuitive way. Service thresholds can be a useful means for structuring the design and defining the development phases of a service system.


August 21, 2014
Posted by Unknown
In this post I would like to discuss how a web server should be considered in the context of a Service Oriented Architecture, or of a distributed system in general. Usually, the web server is used as a tool which makes it possible to publish files and applications over the HTTP protocol. Indeed, applications are developed by using some specific technology like PHP, Ruby, Java, etc., and then deployed into a web server to make them available on the web. The same approach is adopted in Service Oriented Architectures, where web services and orchestrators are developed in a given technology and then published in a web server together with the WSDL documents.

Here I want to share one of the results we obtained by developing Jolie as a service-oriented programming language. In Jolie the web server is just a service which provides its operations over an HTTP protocol, and it follows the same programming rules we use for simple services and orchestrators. Here you can find the code of Leonardo, a web server completely developed in Jolie:

http://sourceforge.net/p/leonardo/code/HEAD/tree/trunk/leonardo.ol

You can download the code here and try to execute it by simply typing the following command in a shell:

      jolie leonardo.ol www/

where www/ is the root folder from which files are retrieved by Leonardo (remember to create it if it is missing). Now try to add a simple index.html file into the www folder, like the following one:

<html>
<head/>
<body>
Hello World!
</body>
</html>

Then open your browser at the URL http://localhost:8000 and you'll see the html page displayed in it. Try also to add images, CSS files, html pages and subfolders into the www/ folder.
You can change the port by setting the Location_Leonardo parameter in the config.iol file, as sketched below.
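A minimal sketch of what that file could contain (the exact content should be checked against the Leonardo sources; the location value here is illustrative):

// config.iol (sketch): the constant read by leonardo.ol to set its inputPort location
constants {
Location_Leonardo = "socket://localhost:8080"
}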

But,
          how can I program the server side behavior of a web application?

And here comes the service-oriented approach. First of all, let me introduce a new operation called test into the HttpInput port, by modifying its interface as follows:

interface HTTPInterface {
RequestResponse:
default(DefaultOperationHttpRequest)(undefined),
test
}

Then let me add the implementation of the test operation into the main:

main {
[ default( request )( response ) {
/* ... code of the default operation */
} ] { nullProcess }

[ test( request )( response ) {
format = "html";
response = "Test!"
}] { nullProcess }
}

Operation test executes a very simple piece of code that:

  • sets the response format to "html", in order to let the browser handle the response as an html file
  • sets the response message to a simple html page

Now, if we relaunch Leonardo and point the browser to http://localhost:8000/test, we will see the page generated by the test operation.

This is a pretty standard way of programming a web application: joining a web page to each operation. You can do it, but I don't like it, even if in some cases it can be useful. I prefer to completely decouple the server side from the client side: I prefer to publish the operation test as a JSON API which can be used by the client through an AJAX call. In this way I can program the server side as a service and the client side as a dynamic application. In order to do this, I modify the interface by introducing the message types, which are also very useful for catching message faults:
type TestType: void {
.message: string
}

interface HTTPInterface {
RequestResponse:
default(DefaultOperationHttpRequest)(undefined),
test( TestType )( TestType )
}

Then I modify the implementation of the test operation:

[ test( request )( response ) {
response.message = request.message + " RECEIVED!"
}] { nullProcess }

Finally, I just insert a little bit of Javascript on the client side by modifying the index.html:

<html><head>
  <script type="text/javascript" src="http://code.jquery.com/jquery-2.0.3.min.js"></script>

  <script>
 function jolieCall( request, callback ) {
     $.ajax({
          url: '/test',
          dataType: 'json',
          data: JSON.stringify( request ),
          type: 'POST',
          contentType: 'application/json',
          success: function( data ){callback( data )}
     });
 }
 
 function afterResponse( data ) {
$( "#test" ).html( data.message )
 }
  </script>
  
</head>
<body>
  <div id="test">
<button onclick="jolieCall( {'message':'Ciao'}, afterResponse )">TEST</button>
  </div>
</body>
</html>

That's all! Try to open the index.html page in the browser and click on the TEST button. It is worth noting that Jolie recognizes the JSON format and sends back a JSON message.

Now, try to add other operations and create a more complex web application.
Have fun!