First and foremost, you are quite right in saying "it's very soon (for me at least) to do such a thing", since it really only runs on Linux. However, I am aware that Microsoft is working diligently to get it running on their servers and Azure platform, which should open up some exciting opportunities to port Linux applications to Windows and vice versa.
I have looked into it, and I would definitely say it's something I'm going to be using consistently over any other virtualization software, including Vagrant. The reasons being:
a. It uses far fewer system resources than Vagrant when you need multiple virtual machines. For example, if you're using a distributed storage solution (relational databases with sharding, Couchbase, etc.) and are developing to scale horizontally, mimicking a server cluster locally with Vagrant is tricky and taxing on system resources. With Docker, however, multiple containers share the same kernel, so you can 'spin up' 5, 10 or 15 servers with very little impact on system performance (see the sketch below).
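As a rough sketch of what standing up a small local cluster looks like (the couchbase image and the container names here are purely illustrative assumptions on my part):

    # three 'servers' in seconds, all sharing the host kernel
    docker run -d --name node1 couchbase
    docker run -d --name node2 couchbase
    docker run -d --name node3 couchbase
    docker ps   # lists all three running containers

Doing the same with Vagrant means booting three complete guest operating systems.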
b. You can split your development environment into multiple pieces. Something that's bugged me for a while is how Vagrant is an all-in-one type of solution. Take Homestead, for example: http://laravel.com/docs/4.2/homestead Nothing against it; as someone who's used it almost religiously, it's a fantastic solution that lets developers start coding ASAP regardless of which OS they use natively.
The problem is what happens when you move from development to deployment/updating. For instance, you're probably only using one form of database storage (let's say MySQL), yet Homestead ships with Postgres and Redis as well, because it has to cater for people using those other technologies. Looking at it from another angle, it ships with NGINX by default, but what if you're using Node.js as the server, or some other web server (LiteSpeed, Apache)? There's no reason for NGINX to be present in that case.
As a result you can't just take your virtual machine and push its contents to a server, because there would be natural redundancies. With Docker, however, you can keep different things in different containers, swap them out like Lego blocks, and deploy only what you need:
- front-end dev tools (Node.js, Ruby, SASS, Yeoman, Bower, gulp, grunt)
- database tech (Couchbase, Postgres, MySQL, etc.)
- server tech (NGINX, Apache, etc.)
- kernel
For example, let's say you're only developing a simple open-source static website. In that case you only need to spin up the 'front end dev tools' container, and you can share the whole thing when you're done without having to mess around removing other components.
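Here's a hedged sketch of that Lego-block idea; the image names, container names and link alias are my own illustrative assumptions, not anything prescribed by Docker:

    # database block and server block as separate, swappable containers
    docker run -d --name appdb -e MYSQL_ROOT_PASSWORD=secret mysql
    docker run -d --name web --link appdb:db -p 80:80 nginx

    # the static-site case: just a front-end tools container, with the
    # project mounted in from the host
    docker run -it --rm -v "$PWD":/src -w /src node npm install

Swapping MySQL for Postgres then only means replacing the database container; the server block stays the same.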
c. Every change to a container is layered, much like a Git commit, so you can sync up with your server in the cloud and push just the changes, rather than re-uploading the whole VM image.
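In practice that flow looks something like this (myrepo/app and mycontainer are placeholder names):

    # snapshot the container's changes as a new image layer, then push;
    # only the new/changed layers actually get uploaded
    docker commit mycontainer myrepo/app:v2
    docker push myrepo/app:v2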
Finally, you may be thinking this is all well and good, but what if I don't want to manage individual containers while still keeping the modularity? Never fear, Docker acquired Fig a little while back: http://www.fig.sh/ A simple YAML file and a command from the terminal, and you can spin up multiple containers with whatever you want in them.
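A minimal fig.yml might look like the sketch below; the service names and images are illustrative choices of mine, not taken from the Fig docs:

    # fig.yml
    web:
      image: nginx
      ports:
        - "80:80"
      links:
        - db
    db:
      image: mysql
      environment:
        MYSQL_ROOT_PASSWORD: secret

Then a single "fig up" from the terminal starts both containers together.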
Docker is definitely worth a look (particularly if you run Linux natively), but it's still in its infancy on other platforms.