Directly interacting with the LXD API
Stéphane Graber
on 18 April 2016
The next post in the LXD series is currently blocked on a pending kernel fix, so I figured I’d do an out of series post on how to use the LXD API directly.
Setting up the LXD daemon
The LXD REST API can be accessed over either a local Unix socket or over HTTPS. The protocol is identical in both cases; the only difference is that the Unix socket is plain text, relying on filesystem permissions for authentication.
To enable remote connections to your LXD daemon, run:
lxc config set core.https_address "[::]:8443"
This will have it bind all addresses on port 8443.
To set up a trust relationship with a new client, a password is required. You can set one with:
lxc config set core.trust_password <some random password>
Local or remote
curl over unix socket
As mentioned above, the Unix socket doesn’t need authentication, so with a recent version of curl, you can just do:
stgraber@castiana:~$ curl --unix-socket /var/lib/lxd/unix.socket s/
{"type":"sync","status":"Success","status_code":200,"metadata":["/1.0"]}
Not the most readable output. You can make it a lot more readable by using jq:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket s/ | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": [
    "/1.0"
  ]
}
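The same query can be made from a program. Below is a minimal Python sketch of speaking HTTP over the Unix socket; the UnixHTTPConnection class and lxd_get helper are names of my own, and the default socket path assumes a stock LXD install:

```python
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """An HTTPConnection that speaks HTTP over a Unix socket."""

    def __init__(self, path):
        # The host name is a placeholder required by the parent class;
        # it only ends up in the Host: header.
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock


def lxd_get(resource, socket_path="/var/lib/lxd/unix.socket"):
    """GET a resource from the REST API and decode the JSON reply."""
    conn = UnixHTTPConnection(socket_path)
    conn.request("GET", resource)
    reply = conn.getresponse().read()
    conn.close()
    return json.loads(reply.decode())
```

Calling `lxd_get("/")` on a machine running LXD should return the same document as the curl command above.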
curl over the network (and client authentication)
The REST API is authenticated by the use of client certificates. LXD generates one when you first use the command line client, so we’ll be using that one, but you could generate your own with openssl if you wanted to.
First, let's confirm that this particular certificate isn't trusted:
curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0 | jq .metadata.auth
"untrusted"
Now, let's tell the server to add it by giving it the password that we set earlier:
stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0/certificates -X POST -d '{"type": "client", "password": "some-password"}' | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": {}
}
And now confirm that we are properly authenticated:
stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/1.0 | jq .metadata.auth
"trusted"
And confirm that things look the same as over the Unix socket:
stgraber@castiana:~$ curl -s -k --cert ~/.config/lxc/client.crt --key ~/.config/lxc/client.key https://127.0.0.1:8443/ | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": [
    "/1.0"
  ]
}
Walking through the API
To keep the commands short, all my examples use the local Unix socket; you can add the arguments shown above to make them work over the HTTPS connection.
Note that in an untrusted environment (so anything but localhost), you should also pass the expected server certificate so that you can confirm that you're talking to the right machine and aren't the target of a man-in-the-middle attack.
Server information
You can get server runtime information with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0 | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": {
    "api_extensions": [],
    "api_status": "stable",
    "api_version": "1.0",
    "auth": "trusted",
    "config": {
      "core.https_address": "[::]:8443",
      "core.trust_password": true,
      "storage.zfs_pool_name": "encrypted/lxd"
    },
    "environment": {
      "addresses": [
        "192.168.54.140:8443",
        "10.212.54.1:8443",
        "[2001:470:b368:4242::1]:8443"
      ],
      "architectures": [
        "x86_64",
        "i686"
      ],
      "certificate": "BIG PEM BLOB",
      "driver": "lxc",
      "driver_version": "2.0.0",
      "kernel": "Linux",
      "kernel_architecture": "x86_64",
      "kernel_version": "4.4.0-18-generic",
      "server": "lxd",
      "server_pid": 26227,
      "server_version": "2.0.0",
      "storage": "zfs",
      "storage_version": "5"
    },
    "public": false
  }
}
Everything except the config section is read-only and so doesn't need to be sent back when updating. Say we want to unset the trust password and have LXD stop listening over HTTPS; we can do that with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"config": {"storage.zfs_pool_name": "encrypted/lxd"}}' a/1.0 | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": {}
}
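Note that the request body contains only the keys we want to keep: the PUT replaces the entire config, so omitted keys are unset. A script would typically GET /1.0 first, drop the unwanted keys and send the remainder back. A minimal Python sketch of that step (the config_without helper is a name of my own):

```python
def config_without(server_metadata, *keys):
    """Build a PUT /1.0 body that keeps the current config minus `keys`.

    PUT replaces the whole config section, so any key left out of the
    body ends up unset on the server.
    """
    config = {k: v for k, v in server_metadata["config"].items()
              if k not in keys}
    return {"config": config}
```

Feeding it the metadata from the GET above with "core.https_address" and "core.trust_password" as the keys to drop produces exactly the body used in the curl command.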
Operations
For anything that could take more than a second, LXD will use a background operation. That’s to make it easier for the client to do multiple requests in parallel and to limit the number of connections to the server.
You can list all current operations with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": {
    "running": [
      "/1.0/operations/008bc02e-21a0-4070-a28c-633b79a46517"
    ]
  }
}
And get more details on it with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/008bc02e-21a0-4070-a28c-633b79a46517 | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": {
    "id": "008bc02e-21a0-4070-a28c-633b79a46517",
    "class": "task",
    "created_at": "2016-04-18T22:24:54.469437937+01:00",
    "updated_at": "2016-04-18T22:25:22.42813972+01:00",
    "status": "Running",
    "status_code": 103,
    "resources": {
      "containers": [
        "/1.0/containers/blah"
      ]
    },
    "metadata": {
      "download_progress": "48%"
    },
    "may_cancel": false,
    "err": ""
  }
}
In this case, the operation is me creating a new container called "blah", with the metadata tracking the progress of the needed image download, in this case of the Ubuntu 14.04 image.
You can subscribe to all operation notifications by using the /1.0/events websocket, or if your client isn’t that smart, you can just block on the operation with:
curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/b1f57056-c79b-4d3c-94bf-50b5c47a85ad/wait | jq .
Which will print a copy of the operation status (same as above) once the operation reaches a terminal state (success, failure or canceled).
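When /wait isn't convenient, the fallback is a simple poll loop against the operation URL. A Python sketch of that logic, with the transport injected so it works over either connection type; the helper name is mine, and it assumes (as the examples above show) that status codes in the 100 range mean the operation is still running:

```python
import time


def wait_for_operation(fetch, operation_url, poll_seconds=1):
    """Poll an operation until it leaves the 1xx 'in progress' range.

    `fetch` is any callable mapping a URL to a decoded API response,
    keeping the transport (Unix socket or HTTPS) out of this sketch.
    Returns the final operation metadata.
    """
    while True:
        operation = fetch(operation_url)["metadata"]
        if not 100 <= operation["status_code"] < 200:
            return operation
        time.sleep(poll_seconds)
```

Remember that operation data disappears shortly after completion, so a robust client should treat a 404 on the operation URL as "already finished".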
The other endpoints
The REST API currently has the following endpoints:
- /
- /1.0
- /1.0/certificates
- /1.0/certificates/<fingerprint>
- /1.0/containers
- /1.0/containers/<name>
- /1.0/containers/<name>/exec
- /1.0/containers/<name>/files
- /1.0/containers/<name>/snapshots
- /1.0/containers/<name>/snapshots/<name>
- /1.0/containers/<name>/state
- /1.0/containers/<name>/logs
- /1.0/containers/<name>/logs/<logfile>
- /1.0/events
- /1.0/images
- /1.0/images/<fingerprint>
- /1.0/images/<fingerprint>/export
- /1.0/images/aliases
- /1.0/images/aliases/<name>
- /1.0/networks
- /1.0/networks/<name>
- /1.0/operations
- /1.0/operations/<uuid>
- /1.0/operations/<uuid>/wait
- /1.0/operations/<uuid>/websocket
- /1.0/profiles
- /1.0/profiles/<name>
Detailed documentation on the various actions for each of them can be found in the LXD REST API documentation.
Basic container life-cycle
Going through absolutely everything above would make this blog post enormous, so let's just focus on the most basic things: creating a container, starting it, dealing with files a bit, creating a snapshot and deleting the whole thing.
Create
To create a container named “xenial” from an Ubuntu 16.04 image coming from https://cloud-images.ubuntu.com/daily (also known as ubuntu-daily:16.04 in the client), you need to run:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -d '{"name": "xenial", "source": {"type": "image", "protocol": "simplestreams", "server": "https://cloud-images.ubuntu.com/daily", "alias": "16.04"}}' a/1.0/containers | jq .
{
  "type": "async",
  "status": "Operation created",
  "status_code": 100,
  "metadata": {
    "id": "e2714931-470e-452a-807c-c1be19cdff0d",
    "class": "task",
    "created_at": "2016-04-18T22:36:20.935649438+01:00",
    "updated_at": "2016-04-18T22:36:20.935649438+01:00",
    "status": "Running",
    "status_code": 103,
    "resources": {
      "containers": [
        "/1.0/containers/xenial"
      ]
    },
    "metadata": null,
    "may_cancel": false,
    "err": ""
  },
  "operation": "/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d"
}
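The request body is the interesting part of that command. A tiny Python helper to build it (the function name is mine; the fields match the request above, with the defaults assuming the same daily Ubuntu image server):

```python
import json


def create_container_body(name, alias,
                          server="https://cloud-images.ubuntu.com/daily",
                          protocol="simplestreams"):
    """Build the JSON body for POST /1.0/containers: create a container
    from an image alias published on a remote image server."""
    return json.dumps({
        "name": name,
        "source": {
            "type": "image",
            "protocol": protocol,
            "server": server,
            "alias": alias,
        },
    })
```

`create_container_body("xenial", "16.04")` produces the same payload passed to `-d` above.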
This confirms that the container creation was received. We can check for progress with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": {
    "id": "e2714931-470e-452a-807c-c1be19cdff0d",
    "class": "task",
    "created_at": "2016-04-18T22:36:20.935649438+01:00",
    "updated_at": "2016-04-18T22:36:31.135038483+01:00",
    "status": "Running",
    "status_code": 103,
    "resources": {
      "containers": [
        "/1.0/containers/xenial"
      ]
    },
    "metadata": {
      "download_progress": "19%"
    },
    "may_cancel": false,
    "err": ""
  }
}
And finally wait until it’s done with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/operations/e2714931-470e-452a-807c-c1be19cdff0d/wait | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": {
    "id": "e2714931-470e-452a-807c-c1be19cdff0d",
    "class": "task",
    "created_at": "2016-04-18T22:36:20.935649438+01:00",
    "updated_at": "2016-04-18T22:38:01.385511623+01:00",
    "status": "Success",
    "status_code": 200,
    "resources": {
      "containers": [
        "/1.0/containers/xenial"
      ]
    },
    "metadata": {
      "download_progress": "100%"
    },
    "may_cancel": false,
    "err": ""
  }
}
Start
Starting the container is done by modifying its running state:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"action": "start"}' a/1.0/containers/xenial/state | jq .
{
  "type": "async",
  "status": "Operation created",
  "status_code": 100,
  "metadata": {
    "id": "614ac9f0-f0fc-4351-9e6f-14710fd93542",
    "class": "task",
    "created_at": "2016-04-18T22:39:42.766830946+01:00",
    "updated_at": "2016-04-18T22:39:42.766830946+01:00",
    "status": "Running",
    "status_code": 103,
    "resources": {
      "containers": [
        "/1.0/containers/xenial"
      ]
    },
    "metadata": null,
    "may_cancel": false,
    "err": ""
  },
  "operation": "/1.0/operations/614ac9f0-f0fc-4351-9e6f-14710fd93542"
}
If you're doing this by hand as I am right now, there's no way you can actually catch that operation and wait for it to finish: it's very quick, and data about past operations disappears 5 seconds after they're done.
You can however check the container running state:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/state | jq .metadata.status
"Running"
Or even get its IP address with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/state | jq .metadata.network.eth0.addresses
[
  {
    "family": "inet",
    "address": "10.212.54.43",
    "netmask": "24",
    "scope": "global"
  },
  {
    "family": "inet6",
    "address": "2001:470:b368:4242:216:3eff:fe17:331c",
    "netmask": "64",
    "scope": "global"
  },
  {
    "family": "inet6",
    "address": "fe80::216:3eff:fe17:331c",
    "netmask": "64",
    "scope": "link"
  }
]
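Picking an address out of that structure is a one-liner once the JSON is decoded. A hypothetical helper, assuming the state metadata layout shown above:

```python
def global_ipv4(state_metadata, interface="eth0"):
    """Return the first global-scope IPv4 address of an interface,
    or None when the interface has no such address yet."""
    for addr in state_metadata["network"][interface]["addresses"]:
        if addr["family"] == "inet" and addr["scope"] == "global":
            return addr["address"]
    return None
```

Checking the scope matters: as the output shows, an interface also carries link-local addresses you usually don't want.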
Read a file
Reading a file from the container is ridiculously easy:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/files?path=/etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Push a file
Pushing a file is only slightly more difficult because you need to set the Content-Type to application/octet-stream:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -H "Content-Type: application/octet-stream" -d 'abc' a/1.0/containers/xenial/files?path=/tmp/a
{"type":"sync","status":"Success","status_code":200,"metadata":{}}
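One thing to watch for when scripting this: the target path travels in the query string, so it should be URL-encoded rather than pasted in raw. A small Python sketch of building the endpoint URL (the helper name is mine):

```python
from urllib.parse import quote


def files_url(container, path):
    """Build the /files endpoint URL for a container, URL-encoding
    both the container name and the target path."""
    return "/1.0/containers/{}/files?path={}".format(
        quote(container, safe=""), quote(path, safe=""))
```

The simple examples above get away without encoding because "/tmp/a" contains nothing curl or the server misreads, but paths with spaces or special characters won't.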
We can then confirm it worked with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/files?path=/tmp/a
abc
Snapshot
To make a snapshot, just run:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X POST -d '{"name": "my-snapshot"}' a/1.0/containers/xenial/snapshots | jq .
{
  "type": "async",
  "status": "Operation created",
  "status_code": 100,
  "metadata": {
    "id": "d68141de-0c13-419c-a21c-13e30de29154",
    "class": "task",
    "created_at": "2016-04-18T22:54:04.148986484+01:00",
    "updated_at": "2016-04-18T22:54:04.148986484+01:00",
    "status": "Running",
    "status_code": 103,
    "resources": {
      "containers": [
        "/1.0/containers/xenial"
      ]
    },
    "metadata": null,
    "may_cancel": false,
    "err": ""
  },
  "operation": "/1.0/operations/d68141de-0c13-419c-a21c-13e30de29154"
}
And you can then get all the details about it:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X GET a/1.0/containers/xenial/snapshots/my-snapshot | jq .
{
  "type": "sync",
  "status": "Success",
  "status_code": 200,
  "metadata": {
    "architecture": "x86_64",
    "config": {
      "volatile.base_image": "0b06c2858e2efde5464906c93eb9593a29bf46d069cf8d007ada81e5ab80613c",
      "volatile.eth0.hwaddr": "00:16:3e:17:33:1c",
      "volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]"
    },
    "created_at": "2016-04-18T21:54:04Z",
    "devices": {
      "root": {
        "path": "/",
        "type": "disk"
      }
    },
    "ephemeral": false,
    "expanded_config": {
      "volatile.base_image": "0b06c2858e2efde5464906c93eb9593a29bf46d069cf8d007ada81e5ab80613c",
      "volatile.eth0.hwaddr": "00:16:3e:17:33:1c",
      "volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]"
    },
    "expanded_devices": {
      "eth0": {
        "name": "eth0",
        "nictype": "bridged",
        "parent": "lxdbr0",
        "type": "nic"
      },
      "root": {
        "path": "/",
        "type": "disk"
      }
    },
    "name": "xenial/my-snapshot",
    "profiles": [
      "default"
    ],
    "stateful": false
  }
}
Delete
You can’t delete a running container, so first you must stop it with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X PUT -d '{"action": "stop", "force": true}' a/1.0/containers/xenial/state | jq .
{
  "type": "async",
  "status": "Operation created",
  "status_code": 100,
  "metadata": {
    "id": "97945ec9-f9b0-4fa8-aaba-06e41a9bc2a9",
    "class": "task",
    "created_at": "2016-04-18T22:56:18.28952729+01:00",
    "updated_at": "2016-04-18T22:56:18.28952729+01:00",
    "status": "Running",
    "status_code": 103,
    "resources": {
      "containers": [
        "/1.0/containers/xenial"
      ]
    },
    "metadata": null,
    "may_cancel": false,
    "err": ""
  },
  "operation": "/1.0/operations/97945ec9-f9b0-4fa8-aaba-06e41a9bc2a9"
}
Then you can delete it with:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket -X DELETE a/1.0/containers/xenial | jq .
{
  "type": "async",
  "status": "Operation created",
  "status_code": 100,
  "metadata": {
    "id": "439bf4a1-e056-4b76-86ad-bff06169fce1",
    "class": "task",
    "created_at": "2016-04-18T22:56:22.590239576+01:00",
    "updated_at": "2016-04-18T22:56:22.590239576+01:00",
    "status": "Running",
    "status_code": 103,
    "resources": {
      "containers": [
        "/1.0/containers/xenial"
      ]
    },
    "metadata": null,
    "may_cancel": false,
    "err": ""
  },
  "operation": "/1.0/operations/439bf4a1-e056-4b76-86ad-bff06169fce1"
}
And confirm it’s gone:
stgraber@castiana:~$ curl -s --unix-socket /var/lib/lxd/unix.socket a/1.0/containers/xenial | jq .
{
  "error": "not found",
  "error_code": 404,
  "type": "error"
}
Conclusion
The LXD API has been designed to be simple yet powerful: it can easily be used by even the simplest client, but it also supports advanced features that allow more complex clients to be very efficient.
Our REST API is stable, which means that any change we make to it will be fully backward compatible with the API as it was in LXD 2.0. We will only be making additions to it, with no removals or behavior changes for the existing endpoints.
Support for new features can be detected by the client by looking at the “api_extensions” list from GET /1.0. We currently do not advertise any but will no doubt make use of this very soon.
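Client-side, that feature check is trivial; a sketch (the helper name is mine, and "storage_zfs" below is a made-up extension name for illustration):

```python
def has_extension(server_info, extension):
    """True if a GET /1.0 response advertises the given API extension."""
    return extension in server_info["metadata"].get("api_extensions", [])
```

A well-behaved client should gate any post-2.0 feature on such a check rather than on the server version string.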
Extra information
The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it