How good is Flex? The future of Symfony

Since Fabien Potencier first wrote about Symfony Flex and the evolution toward version 4 and beyond, I have been following the post series and its development. Beyond one or two simple tests, though, I was not able to use it for real.

Recently I started a new project built from scratch, and since its development cycle overlaps with the launch of Symfony 4, we decided to use Flex from the start, adopt the new project structure, and be prepared for that version; and why not enjoy the journey with the new best practices and the Flex workflow. Another point that set us on this path was that we are building a microservice architecture, and Symfony 4 fits that pattern just fine.

Flex is just as Fabien describes it:

Symfony Flex is all about making it simple and easy to create any Symfony applications, from the most simple micro-style project to the more complex ones with dozens of dependencies. It automates adding and removing bundles. It takes care of giving you sensible defaults. It helps discovering good bundles.

and it really delivers on that promise: you just install new libraries and bundles, and Flex auto-configures them for you. The recipes published in the official repository are well integrated with this new feature; the ones in the contrib repository less so, but I think that is to be expected at this early stage and will surely improve.
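As a small illustration (the recipe alias below is from the public recipe repositories and may change over time), adding and removing a bundle with Flex is just:

```shell
# In a Flex-enabled project: "log" is a recipe alias for monolog-bundle.
composer require log    # installs the bundle and auto-configures it
composer remove log     # removes the bundle and its generated config
```
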

The footprint of a freshly installed project is now really small compared with version 3.3 and earlier, which is really nice for a small application and even better for a microservice.

But what about bundles that do not have a recipe yet, or that do not provide a well-configured one? Well, it is still Symfony in the end: you can do the configuration yourself. It will take a bit longer, but it will work as usual.

The bottom line is that the future of Symfony is brilliant, and I bet it will get even better. My advice: take a leap of faith and use Flex; you will love it.


Business Model and Symfony

If you are brave enough to get out of the comfort zone of structuring applications around Symfony and follow a cleaner path, you may face some challenges on the road, like integration with ORMs (Doctrine).

In this particular post I will talk about some strategies I have followed in the past and what I have learned from them.

Business Model and Doctrine Entities/Documents

The first strategy I followed was to keep the Doctrine mapping under the bundle's Entity or Document directory and inherit from the business classes. Given that I wanted to leave the business package without any reference to the database or the framework, I chose YAML or XML as the mapping language instead of annotations.

The first challenge with this strategy is that objects are created in the business model, so when they reach the entity/document manager, Doctrine does not recognize them and throws an error. The way I found to handle this was to implement a fromBusinessEntity function on each entity/document class, so that before an object gets persisted or updated it goes through this function and converts itself into an object under the Doctrine scope.

This has a big downside: since you have to convert every object, and those objects may have references, embedded documents, or other complex structures, it requires a lot of logic to handle, and with more code comes performance degradation.

The only upside of this approach is that developers who are used to using Symfony as-is, and have never interacted with a model designed by package responsibilities, may feel a bit more comfortable about where to find the entities/documents.

Business Model as Doctrine Entities/Documents

But I couldn't stop with that solution; I felt it was not right, so I decided to dig into the Doctrine mapping configuration. My finding was that we can actually tell Doctrine where to look for mapping references, so we can do something like:

# Doctrine MongoDB Configuration
# (surrounding keys reconstructed following the doctrine_mongodb bundle's schema)
doctrine_mongodb:
    connections:
        default:
            server: "%mongodbServer%"
    default_database: "%mongodbDatabase%"
    document_managers:
        default:
            mappings:
                BusinessPackage:
                    type: yml
                    dir: "%kernel.root_dir%/../../BusinessPackage"
                    prefix: BusinessPackage

This way your business classes can act as Doctrine entities/documents, reducing the complexity of your framework-business integration and dropping the performance degradation introduced by the previous approach.


Between the two approaches I personally choose the second one, because I like to keep my code as clean as possible and the business independent of frameworks.

There may be an even better way to do this, and I'm counting on it, so we can keep learning and improving.

Every time we decide to use a new approach to dealing with frameworks, architecture patterns, code structure, etc., we put ourselves in front of a new challenge. We may have experience from old projects, and we may try new approaches; the goal must always be to learn from our mistakes and improve in the next phase.

OroPlatform, a Skyscraper on top of Symfony Foundations

Say What?

OroPlatform, as they state on their site:

… is an Open-Source Business Application Platform (BAP).
It offers developers the exact business application platform they’ve been looking for, by combining the tools they need. Built in PHP5 and the Symfony2 framework …

It is composed of several bundles and libraries that allow you to build “Business Applications” in less time than if you had to develop them from scratch.

As an open-source project, you can find its code in its GitHub repository and read the official documentation.

There are real-life applications built on top of this platform:

  • OroCRM: their main product; the name speaks to its value.
  • OroCommerce: a new product also delivered by this company, a B2B e-commerce platform.
  • Akeneo: a Product Information Management platform.
  • DiamanteDesk: a help desk.
  • TimeLap: a module to track time in OroCRM.
  • Marello: an ERP for e-commerce.

How do you know?

Recently I had an interaction with this platform, and it was both exciting and frustrating.

The Good

Easy to install

You can easily install the platform just by cloning the repository, performing a composer install, and then either executing the Symfony console command for installation or going to the browser, where it will show an installation wizard. You can find these steps in the official documentation.
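For reference, a console-based install looks roughly like the sketch below; the repository URL and command name are from my recollection of the docs and may differ between platform versions, so check the official documentation first.

```shell
# Clone the application skeleton and install dependencies.
git clone https://github.com/orocrm/platform-application.git
cd platform-application
composer install --prefer-dist

# Run the installer from the console (or finish the setup in the browser instead).
php app/console oro:install
```
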

Value from the start

As previously mentioned, the platform is composed of several bundles that provide implementations for common tasks and are easily extendable through configuration.

Right after installation you get a fully functional application with a lot of value: user management, permissions and ACLs, dashboards, configuration, activities, and so much more.


It provides a REST API by default, and your bundles will inherit this behavior, so you will be delivering a product capable of interacting with the mobile world with zero effort.


Creating new bundles is really easy, nothing different from Symfony. But the power of this platform is in how you can add value to your new bundle and interact with the whole platform. You can add tags to your new entity, ownership, activities like emails, notes and comments, create dashboard widgets, and search over your entities. Those are just a few of the many values you can add to your bundle, and to your business in the end.

The Bad

Sadly, the documentation is not as detailed as we might expect. In many cases that will make us miss incredible features, or even drop them for lack of examples on how to enable them. As a platform and an open-source project that can serve as the base for many other great products, it should be better documented. Maybe the community can help in this scenario.


I really recommend this project; the value you obtain from it is too big to fully cover in this post. But as a user of the platform, I really recommend that the developer team focus a little more on the docs: provide code examples for all use cases, and document references for configurations and interactions. And as a community, we should also help to document it.

In the end, thanks so much to the Oro team: excellent work!

Fast development vs Agile development

In the past few years I have encountered the same issue in many different companies and many different projects, regardless of the area or scope of the project, regardless of whether it was a stable and profitable software solution or a start-up, regardless of the experience of the project manager: the battle between fast development and agile development is the biggest issue in today's development teams.

Fast development

The concept of fast development is as false as its name, but let's start by asking why this requirement is even born.

The quick answer is a false attempt to apply Agile development. When you are required to deliver software solutions ASAP, you are basically required to “fast-develop”. But is that right?

On projects in the start-up category the budget is limited, and in many cases really small, so the client always wants to deliver more with less money, and as fast as light.

Most of the time in these “fast-develop” attempts we, as developers, are forced to code without testing, without design patterns, without research on the state of the art of what we are doing, and even without common sense.

As professionals we should not accept this; we should deliver software solutions with great code quality, solutions that are easily extensible and flexible to change.

In my personal experience, many times I was put under these circumstances and forced to deliver code of such low quality that 90% of the time it required refactoring and a lot of debugging. Shame on me.

That was not fast development at all. The main purpose of “fast development” is to try to deliver software solutions that work and hopefully won't break. But how can you accomplish this with such poor design and zero good programming practices or principles applied?

The answer is clear: you can't.

Agile Development

Agile does not mean fast; agile is about moving quickly and easily.

In the software industry, agile means building the software solution step by step, delivering value at each step but taking all the measures to deliver it right.

That means taking the time to research, to design, to apply good programming practices and principles, to make choices with care, to think about value, to build good software solutions.

It is actually measurable that projects built upon Agile are in fact faster than those that use “fast development”.

I ran a little experiment on one project some time ago. I was team lead at the time, and the team was using, or at least trying to use, Scrum. So I began to measure how fast the team was, based on whether the team delivered on each sprint and how much of that value needed to be reworked in the next sprint. The result was that 40%-50% of the value came back, so in terms of how fast the team was, the answer is crystal clear: only 50% fast.

With those results over 4 whole sprints, I proposed a plan to actually be agile and deliver good value in the same timeframe as the first test. Note that the project was the same and the team was the same; only the strategy changed. We now scheduled time to discuss the solution, to research, to apply design patterns, and to test the code. The measures were the same as in the first scenario. The results were amazing: the team was 30%-40% faster in the first sprint, 40%-50% faster in the second sprint, and 50%-60% faster in the last two sprints. Given that the team was forced to think in another direction and we were working with really bad legacy code, the results were amazing; the team was able to increase its speed in the end.

So what happened? The team kept working on that basis, having a good retrospective at the end of each sprint and trying to look for signs that might lead back to the bad practices.

So, as Robert C. Martin (Uncle Bob) advises in so many of his books and conferences:

By being agile, by applying design patterns and testing your code, you will actually be faster by being slow.

In the end, even on those start-up projects with very limited budgets, using agile will provide a solution that ensures the quality the client expected from the start.


I know that much of this has its roots in professionalism and experience, but we should always look for better ways to do things, to build things. We need to encourage ourselves to fight for and discuss the right ways; that is what makes us professionals in the first place.

As a Software Engineer and as a professional, I have taken a personal stance to explain, advise, educate, and finally agree to code using Agile Development, delivering software solutions at the greatest quality I can.

Nodetrine, a database abstraction layer for NodeJS

As web developers we have all worked with JavaScript at some point, and maybe not just throwing pure JS code but using a framework like jQuery, AngularJS, ReactJS, or one of the many other great frameworks available. A JavaScript background is in all of us by default.

After Google released V8, the JavaScript engine behind the Chromium project, in 2008, the team at Joyent released the first version of NodeJS in 2009. That created a great point in history for the JavaScript community: for the first time it had a runtime that executes pure JavaScript on the server side. Now those who only work on the “frontend”, and have limited knowledge of other server languages like PHP, Python, and Java, can really begin to create software solutions handled entirely in JavaScript.

The NodeJS community

The NodeJS community grows every year at a tremendous rate. Many new projects and frameworks were born to ease developers' lives in building agile and great software solutions.

Projects like express are the base of many, many projects. Some others aim to be more complete web frameworks, like sails, totaljs, and mean, among others, providing an MVC approach and integrating other tools like template generation, form generation, and ORMs.

ORMs, Why?

Database access is one of the first tasks we need to tackle on most projects; any dynamic application needs to persist some data at some point.

Even when Big Data and NoSQL databases change the perspective on storing data and building apps, relational databases like MySQL, Postgres, SQLite, and Oracle are still commonly used.

ORMs provide an OO approach to how we talk to these database engines, and certainly allow us to interact with them in an easy way.

There are already great tools out there, like waterline and sequelizejs.

So why do we need a new one?

Design patterns

There are two common design patterns used to implement ORM solutions: Active Record and Data Mapper.

The main difference between the two, and the one I want to bring out from my personal perspective, is the way they couple your code to their own code.

Active Record, being the simpler and easier of the two, requires you to extend its models, so your business code will be coupled to the ORM you chose until the end. On the other hand, Data Mapper allows you to keep business rules and database code separated, and from an architectural point of view this is important if you want to keep your code clean (and you should), even though its learning curve can be steeper.
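A tiny sketch of that coupling difference, with hypothetical classes that belong to no real ORM: with Active Record the business class extends the ORM's model, while with Data Mapper a separate mapper moves plain business objects in and out of the database.

```javascript
// Active Record: the business class IS a database record.
class Model {
  save() {
    // In a real ORM this would talk to the database.
    return 'INSERT ' + this.constructor.name;
  }
}
class ActiveRecordUser extends Model {} // coupled to the ORM forever

// Data Mapper: the business class stays plain...
class User {
  constructor(name) { this.name = name; }
}
// ...and a separate mapper handles persistence on its behalf.
class UserMapper {
  persist(user) { return 'INSERT User(' + user.name + ')'; }
}

const mapper = new UserMapper();
console.log(mapper.persist(new User('alice'))); // the business object never touched the database
```
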


There are great implementations of the Data Mapper pattern; two examples are SQLAlchemy from the Python community and Doctrine from the PHP community.

Since I do a lot of programming with Symfony, where Doctrine is the default ORM, I have a better understanding of how it works and how it was implemented.

So I decided to create a clone of that great project and juggle a little with the names; that is how Nodetrine was born.


Like its father Doctrine, Nodetrine will be the organization behind several projects, starting with DBAL and followed by ORM.


The Database Abstraction Layer aims to create a common entry point for all relational databases, implemented through the Driver interface, so it can be database-engine agnostic and perform the common interaction tasks with those engines seamlessly.


The DBAL project has been released in its first version, 1.0.0, and can be installed with npm. You can find more info about it on its GitHub page and on readthedocs.

In this first release you will find the base tools to query and manipulate data in the database. Other tools will be implemented in the upcoming releases.


I really hope this project can help build amazing apps, but I desire even more that we can create a community around it and collaborate to make it amazing. So feel free to file bugs, write docs, test it, use it, and write code.

So till next time

Nodetrine is here!!!

(Docker Compose + Docker Swarm) or Kubernetes

Recently the Docker team released Docker Swarm 1.0 as production ready, and being an enthusiast who loves to test and try new things, I gave it a try.

Some time ago I wrote a post about how to use Kubernetes as a development environment, using PHP and Symfony for the example. In this case I will only talk about the pros and cons of these two container cluster solutions. So:


Kubernetes

Developed and supported by Google, and also released this year at version 1.0 as production ready, it brings its own perspective on what a container cluster should be and how it should work. Its concepts (Pods, Controllers, Services, Labels) allow you to orchestrate your application's infrastructure and the relationships between its parts, and deploy this topology to any environment you want, just by pointing kubectl at the right environment. Some tutorials suggest using labels to differentiate environments if you only want to manage one cluster.

The Good:

It brings Google's experience in orchestrating and deploying applications in containers, and with it a really production-ready set of options to ease the work of the DevOps.

Keep Alive:

A really nice feature is that you can configure your Replication Controller to always keep alive the exact same number of containers for a specific app. That means that if for some reason a container stops, it will create a new container from a fresh copy, allowing your system to stay up almost 99% of the time in the face of such disaster situations. It also means that if for some reason you have more than the expected number of containers, it will tear down the exact number of extra containers, so you don't need to worry about spending on extra resources.

Load Balancing:

This is another really great feature: services configure load balancing in front of all your app containers, so you use the service as the entry point. This also helps with the previous feature; if a container is created from scratch, it is automatically added to the load-balancing pool.

Scale Up/Down:

Scaling your app up or down is really easy: just specify the number of replicas you want to keep alive and it will do it for you. For me it is verbose enough, since you need to specify through the command whether you want to scale up or down; there is no way you can make a mistake. Example:

if you have 3 running copies of your app, you can tell it to scale up by 2 more and you will end up with 5, or you can tell it to scale down by 2 and end up with just 1 running copy of your app.

This is really important for me, since on production you wanna make 0 mistakes.
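As a sketch of that safety (the replication controller name `app` is hypothetical; flags per the kubectl reference of that era), the scale command can even assert the expected current size before changing anything:

```shell
# Scale the "app" replication controller to 5 replicas,
# but only if it currently has exactly 3; otherwise the command refuses.
kubectl scale rc app --current-replicas=3 --replicas=5
```
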

Rolling Update:

And the feature I love the most, this one is really amazing: you can deliver updates to all running instances of your app with a single command and zero downtime, and I think that is explanation enough on this point. You can check it out in this demo video:

The Bad:

API and Configuration:

K8s brings a whole new API and file definitions, so if you are used to working with containers through Docker and Docker Compose, for example, you will need to learn a whole new set of commands and file configurations in order to create and orchestrate your cluster and topology. Depending on your topology, it may be a little complicated.

Cluster configuration:

K8s has a specific set of configurations for each kind of cloud (Google Cloud, Amazon EC2, etc.), which introduces a level of complexity that is really not needed or wanted at first install. Another consequence is that you can't, at least not easily, configure your K8s cluster across multiple providers, so you will be anchored to the one cloud solution you chose from the start.

Docker Compose and Docker Swarm

Directly from the Docker team we get Compose and Swarm, tools that, following the UNIX paradigm

Write programs that do one thing and do it well

allow us to design our application topology and deploy it into our cluster.

Docker Compose:

It exposes an easy API to design our application topology: creating all our services, configuring them, connecting them, and even scaling them.

The Good:

Compose follows the same API as Engine, and with that gives us the same set of tools we are used to working with directly in the Docker CLI, expanding it with some other commands. It is really easy to configure in the docker-compose.yml file, and you can even extend those files so you have a set of configurations for each environment.

The Bad:


The scale command. This point may be a little controversial, since there is not really a reason to put it in the bad section; it does the job as expected. But from my perspective it could be designed a little more semantically. The command takes the total number of instances you want running: say you run scale=3 on the app service, you end up with 3 running instances of the app service, as expected. But what if you forget how many instances of some service you have running and you want to scale down by only 2 instances? If you type scale=2 you will end up with 2 instances, not 1 as expected, and the same goes for scaling up. OK, you can say “learn the tool, then use it”, right? But it would be really great if the command explained itself and helped the ops; something like scale-down=2 would ease your life.
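To make the pitfall concrete (the service name `app` is hypothetical), `scale` takes an absolute count, not a delta:

```shell
docker-compose scale app=3   # 3 instances of "app" running

# Later, intending to "remove 2 instances", you type:
docker-compose scale app=2   # ...and end up with 2 instances, not 1
```
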

Keep Alive:

Another missing option is a supervisor that keeps track of how many instances of a service are supposed to be running at all times; this kind of disaster-prevention solution would really help DevOps' lives. Yes, you can say that this can be accomplished with the restart policy, but what if it was one of your cluster nodes that melted down? This kind of feature would start the missing instances on a new node, keeping our app running as expected.

Rolling Update and Load Balance:

Having rolling update functionality, and of course a load-balancing feature, would raise the Docker tools to the production-ready stage we all want them to reach.


Docker Swarm:

The cluster solution from the Docker team; it does the job really well and, like the other tools, follows the Engine API.

The Good:

Creating a swarm with Docker Machine is the recommended way, because Docker Machine does a great job provisioning host instances with the Docker Engine. Machine allows us to deploy Docker onto almost any host out there, physical or cloud. Swarm then allows us to connect all the other nodes through the masters, and yes, it can be multi-provider. For me this is great: not being anchored to any cloud provider, and being able to spread my apps across multiple providers, is a dream.


Maybe it is not right to compare these sets of tools, since each one is designed to accomplish one task, but in the DevOps world a set of tools that brings more to us will certainly have more to win.

From my perspective, the Docker team is doing a marvelous job; their tools are just young, but I'm pretty sure that in the near future all of them will converge and stabilize, delivering a great set of tools that interact with each other and allow us to create anything we need. But in the meantime I advise using K8s, since it is more production ready for all the tests you may encounter in the day-to-day DevOps world.

Symfony a RESTful app: Security (Securing the token path – FIXED)

In the previous post we set up a basic Symfony project aiming to develop a RESTful solution. We also talked about security and how to implement an OAuth2 service, but we also saw a security flaw in one of the most important endpoints, /oauth/v2/token, the one which delivers the access token to the user.

As always, all the code in this series can be found in this GitHub repository under the symfony2_restful_example branch.


First, I want to apologize: the previous post aimed to fix the security flaw of the user password traveling as plain text in the request, and after a while I realized I didn't solve it, since the plain-text password was still required even when using the “rest” authentication provider we implemented in that post. This post will fix that.

The issue:

Let's point out the issue one more time: the path to request the access token requires us to provide several pieces of data (client_id, client_secret, grant_type, redirect_uri, username, password), the first four being the identification of which OAuth client you are using and the remaining two the user authentication.

FOSOAuthServerBundle uses oauth2-php, which implements the OAuth2 protocol defined in the following draft. That document, in section #4.3.2, explains that in order to request an access token the user needs to provide his username and password, so FOSOAuthServerBundle requires you to pass those fields along in the request as query parameters, following the specification.
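Such a token request looks roughly like this (host, client credentials, and user values are placeholders); note how the password travels in plain text, which is exactly the flaw under discussion:

```shell
# Hypothetical values; only the parameter names match the spec.
curl "http://example.com/oauth/v2/token?client_id=1_CLIENT_ID&client_secret=CLIENT_SECRET&grant_type=password&redirect_uri=http://example.com&username=john&password=PLAIN_TEXT_PASSWORD"
```
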

The issue comes from the fact that the bundle relies on the user provider you configure to perform the authentication (username and password validation), which in most cases, our example included, is FOSUserBundle, and this one requires the password in plain text.

How to fix it:

The draft does not mandate any specific authentication mechanism, so it is OK if we make some tweaks and try to fix this issue with another approach.

So what we really need is to send the user password hashed, and then validate that password on the server side.

To accomplish this we will need to implement a Storage service that FOSOAuthServerBundle will use.


namespace AppBundle\Services;

use FOS\OAuthServerBundle\Model\ClientInterface;
use FOS\OAuthServerBundle\Storage\OAuthStorage as OAuthStorageBase;
use OAuth2\Model\IOAuth2Client;
use Symfony\Component\Security\Core\Exception\AuthenticationException;

class OAuthStorage extends OAuthStorageBase
{
    public function checkUserCredentials(IOAuth2Client $client, $username, $password)
    {
        if (!$client instanceof ClientInterface) {
            throw new \InvalidArgumentException('Client has to implement the ClientInterface');
        }

        try {
            $user = $this->userProvider->loadUserByUsername($username);
        } catch (AuthenticationException $e) {
            return false;
        }

        // Compare the already-hashed password sent by the client
        // against the stored hash, instead of encoding a plain-text one.
        if ($user->getPassword() !== $password) {
            return false;
        }

        return array(
            'data' => $user,
        );
    }
}

And the configuration:

# Learn more about services, parameters and containers at
# http://symfony.com/doc/current/service_container.html
parameters:
#    parameter_name: value

services:
#    service_name:
#        class: AppBundle\Directory\ClassName
#        arguments: ["@another_service_name", "plain_value", "%parameter_name%"]
    app.oauth_storage:  # service id reconstructed; the original name was lost
        class: AppBundle\Services\OAuthStorage
        arguments:
            - "@fos_oauth_server.client_manager"
            - "@fos_oauth_server.access_token_manager"
            - "@fos_oauth_server.refresh_token_manager"
            - "@fos_oauth_server.auth_code_manager"
            - "@fos_oauth_server.user_provider"
            - "@security.encoder_factory"
        public: false

As you can see, we override the checkUserCredentials function and use a simple comparison between the user's current (hashed) password and the provided one.

Let's point the FOSOAuthServerBundle configuration to use our new service:
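Assuming the bundle's `service.storage` option and a service id of `app.oauth_storage` (the id is my own naming for this example), the configuration would look like:

```yaml
# app/config/config.yml
fos_oauth_server:
    service:
        storage: app.oauth_storage  # "app.oauth_storage" is a hypothetical service id
```
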


Now we can send our user password hashed to the token path.


There is a PR #357 on FOSOAuthServerBundle that I created with this solution; let's wait and see if it gets approved or a better solution comes out.


We need to secure user data as much as possible, and there are a lot of ways of doing that. Which one is the best? Hard to say: you need to study a lot about these topics and then pick whatever you think is the best option for your current project.


  1. Motivation
  2. REST Levels 0, 1, 2 ( FOSRestBundle, JMSSerializerBundle )
  3. REST Levels 0, 1, 2 ( FOSUserBundle )
  4. REST Levels 0, 1, 2 ( NelmioApiDocBundle )
  5. REST Levels 3 ( BazingaHateoasBundle )
  6. Security ( FOSOAuthServerBundle )
  7. Security ( Securing the token path )
  8. Security ( Securing the token path – FIXED )