We intended to do an in-place sort of the mount-point list, but
sorted() returns a new list, which was never captured. Move to the
in-place .sort() method.
It seems the existing unit test missed this. Add a new test, taken
from the bug, which does exhibit the sorting issue. Also add a
unit test of just the comparator for sanity.
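For illustration, a minimal sketch of the difference (the mount-point
tuples and the comparator here are hypothetical, not the real ones
from the bug):

    from functools import cmp_to_key

    def cmp_mount_order(a, b):
        # hypothetical comparator: fewer path components sorts first
        depth = lambda mnt: len([p for p in mnt[0].split('/') if p])
        return depth(a) - depth(b)

    mount_points = [('/var/log', 'lv_log'), ('/', 'lv_root'),
                    ('/var', 'lv_var')]

    # the bug: sorted() returns a new list; the original stays unsorted
    sorted(mount_points, key=cmp_to_key(cmp_mount_order))

    # the fix: list.sort() sorts in place
    mount_points.sort(key=cmp_to_key(cmp_mount_order))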
Closes-Bug: 1699437
Change-Id: I8101e4a1804a4af7dbda20d48bf362c3f4ad2742
This provides basic LVM support to dib-block-device.
Co-Authored-By: Ian Wienand <iwienand@redhat.com>
Change-Id: Ibd624d9f95ee68b20a15891f639ddd5b3188cdf9
The MBR Partition Table Entry (PTE) allows one to specify many
possible partition types; one of the benefits of this is being able
to specify the CHS variant or the LBA variant.
By default, LBA only creates partitions of type 0x83 (of course,
that's only because the documentation doesn't tell you how to make it
do anything else).
I will take up Ian's suggestion from patch set 2 for a more rigorous
test in a separate change.
Change-Id: If3068535980eac2e58d4025444c65147a8c7fedc
Closes-Bug: #1703352
We introduced the "settle" in
I90103b59357edebbac7a641e8980cb282d37561b thinking that maybe kpartx
had not finished writing the partition. This probably wasn't a bad
first assumption, since we used to have this -- but it seems
insufficient.
The other failure here seems to be that kpartx hasn't actually seen
the updated partition table in the image, so it has correctly (in its
mind) not mounted the partition.
Looking at strace of fdisk run manually on a loopback, it will do a
fsync on the raw device after writing and then a global sync as it
exits.
This change replicates that: we flush and fsync in mbr.py in the exit
handler after writing the partition, before closing the file (I've
updated one of the unit tests to double-check the call). In the
partitioning.py caller we execute a sync call too.
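A minimal sketch of the pattern (the file name, size and offsets are
illustrative only, not the real mbr.py code):

    import os
    import subprocess
    import tempfile

    # stand-in for the disk image written by mbr.py
    with tempfile.NamedTemporaryFile(suffix=".img") as image:
        image.truncate(1024 * 1024)
        image.seek(446)            # offset of first MBR partition entry
        image.write(b"\x00" * 16)  # placeholder PTE bytes
        image.flush()              # flush userspace buffers
        os.fsync(image.fileno())   # force data to the device before close
    # the partitioning.py caller additionally issues a global sync
    subprocess.check_call(["sync"])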
Since it does seem unlikely the "-s" option of kpartx is not working,
I've removed the udev settle work-around too.
Change-Id: Ia77a0ffe4c76854b326ed76490479d9c691b49aa
Partial-Bug: #1698337
Currently we pass a reference to a global "rollback" list to create()
to keep rollback functions. Other nodes don't need to know about
global rollback state, and by passing by reference we're giving them
the chance to mess it up for everyone else.
Add a "add_rollback()" function in NodeBase for create() calls to
register rollback calls within themselves. As they hit rollback
points they can add a new entry. lambda v arguments is much of a
muchness -- but this is similar to the standard atexit() call so with
go with that pattern. A new "rollback()" call is added that the
driver will invoke on each node as it works its way backwards in case
of failure.
On error, nodes will have rollback() called in reverse order (which
then calls registered rollbacks in reverse order).
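A much-simplified sketch of the shape (not the real NodeBase, just
the pattern described above):

    class NodeBase(object):
        def __init__(self, name):
            self.name = name
            self._rollbacks = []

        def add_rollback(self, func, *args, **kwargs):
            # same calling convention as atexit.register()
            self._rollbacks.append((func, args, kwargs))

        def rollback(self):
            # undo in reverse order of registration
            for func, args, kwargs in reversed(self._rollbacks):
                func(*args, **kwargs)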
A unit test is added to test rollback behaviour.
Change-Id: I65214e72c7ef607dd08f750a6d32a0b10fe97ac3
Keep track of the mount-point ordering in a state variable, rather
than a global. This path is tested by existing unit tests.
Note a prior change inserted the MountNode objects directly into a
list in self.state, which made sorting quite easy as they can just
implement __lt__. Unfortunately we still JSON-dump the state, and
thus we can't have arbitrary objects in it (future work may be to
check keys inserted into the status object...). So we have to do a
bit of wrangling with tuple lists and comparison functions here, but
it's not too bad.
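A sketch of the resulting shape (the keys and comparator are
illustrative, not the exact code):

    import functools
    import json

    def cmp_mount_point(a, b):
        # hypothetical comparator: higher directories sort first
        depth = lambda mnt: len([p for p in mnt[0].split('/') if p])
        return depth(a) - depth(b)

    state = {'mount_order': [('/var', 'mount_var'), ('/', 'mount_root')]}
    state['mount_order'].sort(key=functools.cmp_to_key(cmp_mount_point))
    json.dumps(state)  # works: the state holds only plain types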
Change-Id: I0c51e0c53c4efdb7a65ab0efe09a6780cb1affa8
As we add filesystems, add them to the global state and check that
the labels are unique. Add a unit test and remove the old global
value.
Bonus fixup to the length check, and a test for that too.
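A simplified sketch of the checks (the key name and the 16-character
limit here are assumptions for illustration):

    MAX_LABEL_LENGTH = 16  # assumed limit for illustration

    def add_fs_label(state, label):
        labels = state.setdefault('fs_labels', [])
        if len(label) > MAX_LABEL_LENGTH:
            raise RuntimeError("Filesystem label too long: %s" % label)
        if label in labels:
            raise RuntimeError("Duplicate filesystem label: %s" % label)
        labels.append(label)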
Change-Id: I0f5a96f687c92e000afc9c98a26c49c4b1d3f28d
With I468dbf5134947629f125504513703d6f2cdace59 each node has a
reference to the global state object. This means it gets pickled into
the node-list, which is loaded for later calls. There is no need to
reload state.json and pass it to the later cmd_* calls, as the nodes
can see it via the unpickled self.state.
Change-Id: I9e2f8910f17599d92ee33e7df8e36d8ed4d44575
Making the global state reference a defined part of the node makes
some parts of the block device processing easier and removes the need
for other global values.
The state is passed to PluginNodeBase.__init__() and expected to be
passed into all nodes as they are created. NodeBase.__init__() is
updated with the new parameter 'state'.
The parameter is removed from the create() call as nodes can simply
reference it at any point as "self.state".
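A simplified sketch of the new shape (signatures abbreviated, class
names illustrative):

    class NodeBase(object):
        def __init__(self, name, state):
            self.name = name
            self.state = state  # global state, set at construction

    class ExampleNode(NodeBase):
        def create(self):       # no 'state' argument any more
            self.state['example'] = {'created': True}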
This is similar to 1cdc8b20373c5d582ea928cfd7334469ff36dbce, except it
is based on I68840594a34af28d41d9522addcfd830bd203b97 which loads the
node-list from pickled state for later cmd_* calls. Thus we only
build the state *once*, at cmd_create() time as we build the node
list.
Change-Id: I468dbf5134947629f125504513703d6f2cdace59
Currently the later cmd_* calls -- umount, cleanup, delete -- all
recreate the node graph by parsing the config file using
create_graph().
There is some need, however, to have a sense of global state when
building the node list. The problem is, this is a one time operation
-- we do not want to rebuild that state for these later calls (see the
"loaded" checks in proposed
Ic3b805f9258128d5233b21ff25579c03487c7fcc).
An insight here seems to be that these cmd_* calls do not actually
want to re-parse the configuration file and rebuild the node list;
they just want to walk the node list in reverse with the state as
provided after cmd_create().
So, rather than re-creating the node list, we might as well just
pickle it, save it to disk alongside the state dictionary dump and
reload it for cmd_*.
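A minimal sketch of the save/reload (the file name is assumed):

    import os
    import pickle

    def save_node_list(node_list, state_dir):
        with open(os.path.join(state_dir, 'node-list.pickle'), 'wb') as f:
            pickle.dump(node_list, f)

    def load_node_list(state_dir):
        with open(os.path.join(state_dir, 'node-list.pickle'), 'rb') as f:
            return pickle.load(f)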
After this, I think we can safely have PluginBase.__init__() be passed
the state. We will now know that this will only be called once,
during initial creation.
Change-Id: I68840594a34af28d41d9522addcfd830bd203b97
You can't pickle a static-method reference, which complicates saving
the node graph when the "rollback" callback wants to hold references
to these functions. The outer module (localoop.py) is
small anyway, so from an organisation point of view the difference is
minimal. Since these are really only called with parameters from the
containing class, they could be class methods with no parameters, at
the small expense of having to fiddle the mbr test-case a bit.
Change-Id: I6f9592a4295abe1b41294b79828bc2f3c2da01c6
This is code motion with some small changes to make follow-ons
easier.
test_blockdevice_mbr.py is moved alongside the other tests. It is
modified slightly to use the standard base class and remove a lot of
repeated test setup; a fixture is used for the tempdir (so it doesn't
have to be torn down, and is removed properly on error) and the partx
args are moved into setUp() so each test doesn't have to create them.
No functional change. It is renamed to test_mbr.py for shortness.
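Roughly, the fixture-based setup looks like this (the test name and
partx arguments are illustrative):

    import os

    import fixtures
    import testtools

    class TestMBR(testtools.TestCase):
        def setUp(self):
            super(TestMBR, self).setUp()
            # cleaned up automatically, even if the test errors out
            self.tmp_dir = self.useFixture(fixtures.TempDir()).path
            # hypothetical shared partx arguments
            self.partx_args = ['partx', '--raw', '--output', 'NR,START,END']

        def test_tmp_dir(self):
            self.assertTrue(os.path.isdir(self.tmp_dir))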
test_blockdevice_utils.py is merged with existing test_utils.py. No
change to the tests.
test_blockdevice.py is removed. It isn't doing anything currently; to
work it will need to take an approach based more on mocking of calls
that require elevated permissions. It's in history if we need it.
Change-Id: I87b1ea94afaaa0b44e6a57b9d073f95a63a04cf0
assertRaisesRegexp was renamed to assertRaisesRegex in Py3.2.
For more details, please check:
https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaisesRegex
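For example (assuming Python 3):

    import unittest

    class TestRename(unittest.TestCase):
        def test_bad_int(self):
            # assertRaisesRegexp is the deprecated spelling of this
            with self.assertRaisesRegex(ValueError, "invalid literal"):
                int("not-a-number")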
Change-Id: I705c958c0dbf1daa409ed29ccbc038426298c306
Closes-Bug: #1436957
Currently we keep a global list of mount-points defined in the
configuration and automatically set up dependencies between mount
nodes based on their global "mount order" (i.e. higher directories
mount first).
The current method for achieving this is roughly to add the mount
points to a dictionary indexed by mount-point, then at the
"get_edges()" call build the sorted list ... unless it has already
been built, because this gets called for every node.
It seems much simpler to keep a sorted list of the MountPointNode
objects as we add them. We then don't need to implement a sorting
algorithm; we can just use sort() and implement __lt__ for the
nodes.
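A sketch of the idea (class shape simplified):

    class MountPointNode(object):
        def __init__(self, mount_point):
            self.mount_point = mount_point

        def __lt__(self, other):
            # fewer path components == higher directory == mounts earlier
            depth = lambda p: len([c for c in p.split('/') if c])
            return depth(self.mount_point) < depth(other.mount_point)

    nodes = [MountPointNode('/var/log'),
             MountPointNode('/'),
             MountPointNode('/var')]
    nodes.sort()  # '/' first, then '/var', then '/var/log'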
I believe the existing mount-order unit testing is sufficient; I'm
struggling to find a valid configuration where the mount-order is
*not* correctly specified in the configuration graph.
Change-Id: Idc05cdf42d95e230b9906773aa2b4a3b0f075598
A couple of things are going on here, but I think it makes sense to
do them atomically.
The NodeBase.create() argument "results" is the global state
dictionary that will be saved to "state.json", reloaded in later
phases and passed to them as the argument "state". So for
consistency, call this argument "state" (this fits with the upcoming
change to start building the state dictionary earlier, in the
PluginBase.__init__() calls).
Since the "state" is a pretty important part of how everything works,
move it into a separate object. This is treated as essentially a
singleton. It bundles it nicely together for some added
documentation [1].
We move instantiation of this object out of the generic
BlockDevice.__init__() call and into the actual cmd_* drivers. This
is because there are two distinct instantiation operations --
creating a new state (during cmd_create) and loading an existing
state (the other cmd_* calls). This is also safer -- since we know
the later cmd_* calls are looking for an existing state.json, we will
fail if it somehow goes missing.
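A much-simplified sketch of the two instantiation paths (class and
file names are assumptions):

    import json
    import os

    class BlockDeviceState(object):
        def __init__(self, filename=None):
            if filename is None:
                # cmd_create: start with a fresh state
                self.state = {}
            else:
                # other cmd_* calls: an existing dump must be present
                if not os.path.exists(filename):
                    raise RuntimeError("State dump not found: %s" % filename)
                with open(filename) as f:
                    self.state = json.load(f)

        def save(self, filename):
            with open(filename, 'w') as f:
                json.dump(self.state, f)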
To more fully unit test this, some testing plugins and new
entry-points are added. These add known state values which we check
for. These should be a good basis for further tests.
[1] As noted, we could probably do some fun things in the future,
like making this implement a dictionary and adding some safety
features such as read-only keys.
Change-Id: I90eb711b3e9b1ce139eb34bdf3cde641fd06828f
This was suggested in a review comment in
I8a5d62a076a5a50597f2f1df3a8615afba6dadb2. It works out quite nicely
because the BlockDevice() driver now doesn't need to know anything
about stevedore or plugins, and just works on the node list. It also
simplifies the unit testing by not having to call create_graph through
a BlockDevice object.
Change-Id: I98512f6cf42e256d2ea8225a0b496d303bf357b8
This completes the transition started in
Ic5a61365ef0132476b11bdbf1dd96885e91c3cb6.
The new file plugin.py is the place to start with this change. The
abstract base classes PluginBase and NodeBase are heavily documented.
NodeBase essentially replaces Digraph.Node.
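In outline, the interfaces look something like this (heavily
simplified, not the fully documented classes):

    import abc

    class NodeBase(abc.ABC):
        def __init__(self, name):
            self.name = name

        def get_name(self):
            return self.name

        @abc.abstractmethod
        def create(self):
            """Create the block-device layer this node represents."""

    class PluginBase(abc.ABC):
        @abc.abstractmethod
        def get_nodes(self):
            """Return the NodeBase objects to insert into the graph."""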
The changes in level?/*.py make no functional changes, but are just
refactoring to implement the plugin and node classes consistently.
Additionally we have added asserts during parsing & generation to
ensure plugins implement PluginBase, and that get_nodes() always
returns NodeBase objects for the graph.
Change-Id: Ie648e9224749491260dea65d7e8b8151a6824b9c
This switches the code to use networkx for the digraph implementation.
Note that the old implementation specifically isn't removed in this
change -- for review clarity. It will be replaced by a base class
that defines things properly to the API described below.
Plugins return a node object with three functions (a sketch of how
these map onto the graph follows the list):
get_name() : return the unique name of this node.
get_nodes() : return a list of nodes for insertion into the graph.
Usually this is just "self". Some special plugins, such as
partitioning, add extra nodes at this point, however.
get_edges() : return a tuple of two lists: edges_from and edges_to.
As you would expect, the first is a list of node names that point to
us, and the second is a list of node names we point to. Usually
this is only populated as ([self.base], []) -- i.e. our "base" node
points to us. Some plugins, such as mounting, create links both to
and from themselves, however.
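A rough sketch of how this maps onto networkx (node names here are
hypothetical):

    import networkx as nx

    g = nx.DiGraph()
    for name in ('image0', 'partition1', 'mkfs_root', 'mount_root'):
        g.add_node(name)

    # each node's get_edges() contributes edges like these
    g.add_edge('image0', 'partition1')
    g.add_edge('partition1', 'mkfs_root')
    g.add_edge('mkfs_root', 'mount_root')

    create_order = list(nx.topological_sort(g))    # create() order
    teardown_order = list(reversed(create_order))  # umount/cleanup/delete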
Plugins have been updated and some test cases added (error cases
specifically).
Change-Id: Ic5a61365ef0132476b11bdbf1dd96885e91c3cb6
This moves to a more generic config parser that doesn't have plugins
parsing part of the tree.
I understand why it ended up that way; we have a "partitions" key
which has special semantics compared to other keys, and there was a
desire to keep it isolated from the core tree->graph code. But this
isn't really isolated; you have to reverse-engineer several
module-crossing boundaries, extra classes and repetitive recursive
functions.
Ultimately, plugins should have access to the node graph, but not
participate in configuration parsing. This way we ensure that plugins
can't invent new methods of configuration parsing.
Note: unit tests produce the same tree -> graph conversion as the
old method; i.e. this is not intended to be a functional change.
Change-Id: I8a5d62a076a5a50597f2f1df3a8615afba6dadb2
Add a range of unit-testing for configuration parsing, graph
generation and mount-point generation. Unfortunately there are some
global-variable hacks, and some stubs, but it's a start.
Change-Id: I9e4f950c2c2ea656fc0c1a14594059fb4c62fa35
Block device handling can be somewhat complex -- especially
when taking things like md, lvm or encryption into account.
This patch factors out the creation and deletion of the local
loop image device into a Python library.
The main purpose of this patch is to implement the needed
infrastructure. Based on this, more advanced functions can be added.
Examples: (advanced) partitioning, LVM, handling different boot
scenarios (BIOS, UEFI, ...), possibility of handling multiple images
(local loop image, iSCSI, physical hard disk, ...), handling of
different filesystems for different partitions / LVs.
Change-Id: Ib626b36a00f8a5dc3dbde8df3e2619a2438eaaf1
Signed-off-by: Andreas Florath <andreas@florath.net>