Development guidelines

  1. Follow PEP8, except:
     • Limit all lines to a maximum of 119 characters
  2. Use git flow
  3. Write docstrings

Flow for feature tasks

  • Create a new branch from develop
git checkout develop
git pull origin develop
git checkout -b feature/task-id
  • Perform brilliant work (don’t forget about tests!)
  • Update CHANGELOG.rst
  • Verify that tests are passing
  • Push all changes to origin (http://code.opennodecloud.com)
  • Create a Merge Request and assign it to a reviewer. Make sure that the MR can be merged automatically. If not, resolve
    the conflicts by merging the develop branch into yours:
git checkout feature/task-id
git pull origin develop
  • Resolve ticket in JIRA.

Flow for hot fixes

  • TODO

Documentation policy FAQ

  1. Do I need to put all endpoint docs in one file or separately?

    • API endpoint docs go into the source code and are extracted into RST.
  2. Where should I add a general plugin description?

    In 2 places:

    • A link in the NodeConductor docs.
    • An expanded overview in the Introduction section of the plugin docs.
  3. Where should I describe plugin objects' features (quotas, cost tracking, etc.)?

    • In the guide section.
  4. Where can I see all development policies and guides?

    • In the NodeConductor documentation, developer section.

API documentation

NodeConductor generates API documentation based on docstrings from the following classes (an AppConfig example follows the list):

  • The AppConfig class docstring should give general information about an app
  • A View class docstring should describe the intention of the top-level endpoint
  • A View method docstring should explain the usage of a particular method or action
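
A minimal sketch of such an AppConfig docstring (the application and class names are illustrative):

from django.apps import AppConfig


class DemoConfig(AppConfig):
    """
    Demo application adds support for provisioning of demo resources.
    It serves as a reference implementation for plugin authors.
    """
    name = 'nodeconductor_demo'
    verbose_name = 'NodeConductor Demo'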

Endpoints are grouped by Django apps in RST files located in docs/drfapi.

Use the following command to generate RST files for the API:

$ nodeconductor drfdocs

Note that you should have all development requirements specified in the setup.py file properly installed.

In order to specify a docstring for list views, you can override the list method.

For example,

def list(self, request, *args, **kwargs):
    """
    To get a list of instances, run **GET** against */api/openstack-instances/* as authenticated user.
    Note that a user can only see connected instances:
    """
    return super(InstanceViewSet, self).list(request, *args, **kwargs)

In order to specify a docstring for detail views, you can override the retrieve method.

For example,

def retrieve(self, request, *args, **kwargs):
    """
    To stop/start/restart an instance, run an authorized **POST** request against the instance UUID,
    appending the requested command.
    """

    return super(InstanceViewSet, self).retrieve(request, *args, **kwargs)

NodeConductor plugins

Plugin as extension

NodeConductor extensions are developed as auto-configurable plugins. One plugin can contain several extensions, each of which is a pure Django application on its own. In order to be recognized and automatically connected to NodeConductor, some additional configuration is required.

Extensions' URLs will be registered automatically only if settings.NODECONDUCTOR['EXTENSIONS_AUTOREGISTER'] is True, which is the default.

Create a class inherited from nodeconductor.core.NodeConductorExtension and implement methods that reflect your app's functionality. At least django_app() should be implemented.
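
A minimal sketch of such an extension class (the module and application names are illustrative; django_app() is assumed to be a static method returning the app's dotted path):

from nodeconductor.core import NodeConductorExtension


class DemoExtension(NodeConductorExtension):

    @staticmethod
    def django_app():
        # Dotted path of the Django application shipped by this extension.
        return 'nodeconductor_demo'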

Add an entry point named "nodeconductor_extensions" to your package's setup.py. Example:

entry_points={
    'nodeconductor_extensions': ('nodeconductor_demo = nodeconductor_demo.extension:DemoExtension',)
}

Plugin structure

In order to create a proper plugin repository structure, please execute the following steps:

  1. Install cookiecutter
  2. Run the NodeConductor plugin cookiecutter template:
cookiecutter https://github.com/opennode/cookiecutter-nodeconductor-plugin.git

You will be prompted to enter values for a number of variables. Note that default values are suggested in brackets.

Plugin documentation

  1. Keep the plugin's documentation within the plugin's code repository.

  2. The documentation page should start with the plugin's title and description.

  3. Keep the plugin's documentation page structure similar to NodeConductor's main documentation page:

    • Guide
      • should contain at least installation steps.
    • API
      • should include a description of the API extension, if any.
  4. Set up readthedocs documentation rendering and issue a merge request against NodeConductor's repository with a link.

  5. Add a section with the plugin's description and a link to NodeConductor's plugins section.

REST permissions

Permissions for viewing

Viewing permissions are implemented through permission classes and filters that are applied to the viewset's queryset:

class MyModelViewSet(viewsets.ModelViewSet):
    # ...
    filter_backends = (filters.GenericRoleFilter,)
    permission_classes = (rf_permissions.IsAuthenticated,
                          rf_permissions.DjangoObjectPermissions)

Permissions for through models

To register permissions for through-models, one can use the convenience function set_permissions_for_model:

filters.set_permissions_for_model(
    MyModel.ConnectedModel.through,
    customer_path='group__projectrole__project__customer',
    project_path='group__projectrole__project',
)

Permissions for creation/deletion/update

Creation, update and deletion permissions are implemented using django-permission. Filters for allowed modifiers are defined in perms.py in each of the applications.

Advanced validation for CRUD

If the validation logic is based on the request payload (rather than on the user role or the endpoint), the pre_save and pre_delete methods of a ViewSet should be used.
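
A minimal sketch of such payload-based validation (the size field, the limit and the ValidationError import are illustrative assumptions):

from rest_framework import exceptions, viewsets


class MyModelViewSet(viewsets.ModelViewSet):
    # ...

    def pre_save(self, obj):
        # Hypothetical payload-based rule: reject objects whose requested
        # size exceeds an illustrative hard limit.
        if obj.size > 1024:
            raise exceptions.ValidationError('Requested size is too large.')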

Managed entities

Overview

Managed entities are entities for which NodeConductor's database is considered the authoritative source of information. By means of the REST API, a user defines the desired state of the entities. NodeConductor's jobs are then executed to make the backend (OpenStack, GitHub, JIRA, etc.) reflect the desired state as closely as possible.

Since making changes to a backend can take a long time, they are done in background tasks.

Here's a proper way to deal with managed entities (a sketch of the background-job part follows the list):

  • within the scope of the REST API request:
  1. introduce the change (create, delete or edit an entity) to NodeConductor's database;
  2. schedule a background job, passing the instance id as a parameter;
  3. return a positive HTTP response to the caller.
  • within the scope of the background job:
  1. fetch the entity being changed by its instance id;
  2. make sure that it is in a proper state (e.g. not being updated by another background job);
  3. transactionally update its state to reflect that it is being updated;
  4. perform the necessary calls to the backend to synchronize changes from NodeConductor's database to that backend;
  5. transactionally update its state to reflect that it is not being updated anymore.
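
A minimal sketch of the background-job part as a Celery task (the Environment model, its state constants and the backend methods are purely illustrative):

from celery import shared_task
from django.db import transaction


@shared_task
def update_environment(environment_id):
    # Make sure the entity is in a proper state and transactionally
    # mark it as being updated.
    with transaction.atomic():
        environment = Environment.objects.select_for_update().get(id=environment_id)
        if environment.state != Environment.States.STABLE:
            return  # another background job is already working on it
        environment.state = Environment.States.UPDATING
        environment.save(update_fields=['state'])

    # Synchronize the change from NodeConductor's database to the backend.
    environment.get_backend().push_environment(environment)

    # Transactionally mark the entity as not being updated anymore.
    with transaction.atomic():
        environment.state = Environment.States.STABLE
        environment.save(update_fields=['state'])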

Using the above flow makes it possible for the user to get immediate feedback from the initial REST API call and then query state changes of the entity.

Managed entities operations flow

  1. The view receives a request for an entity change.
  2. If the request contains any data, the view passes it to a serializer for validation.
  3. The view extracts operation-specific information from the validated data and saves the entity via the serializer.
  4. The view starts an executor with the saved instance and the operation-specific information as input.
  5. The executor handles entity state checks and transitions.
  6. The executor schedules Celery tasks to perform asynchronous operations.
  7. The view returns a response.
  8. The tasks asynchronously call backend methods to perform the required operation.
  9. Callback tasks change the instance state after backend method execution.

Simplified schema of operations flow

View -> Serializer -> View -> Executor -> Tasks -> Backend

Event logging

Event log entries are something an end user will see. In order to improve the user experience, the messages should be written in a consistent way.

Here are the guidelines for writing good log events.

  • Use present perfect passive for the message.

    Right: Environment %s has been created.

    Wrong: Environment %s was created.

  • Build a proper sentence: start with a capital letter, end with a period.

    Right: Environment %s has been created.

    Wrong: environment %s has been created

  • Include entity names into the message string.

    Right: User %s has gained role of %s in project %s.

    Wrong: User has gained role in project.

  • Don’t include too many details into the message string.

    Right: Environment %s has been updated.

    Wrong: Environment has been updated with name: %s, description: %s.

  • Use the name of an entity instead of its __str__.

    Right: event_logger.info('Environment %s has been updated.', env.name)

    Wrong: event_logger.info('Environment %s has been updated.', env)

  • Don’t put quotes around names or entity types.

    Right: Environment %s has been created.

    Wrong: Environment "%s" has been created.

  • Don’t capitalize entity types.

    Right: User %s has gained role of %s in project %s.

    Wrong: User %s has gained Role of %s in Project %s.

  • For actions that require background processing, log both the start of the process and its outcome.

    Success flow:

    1. log Environment %s creation has been started. within HTTP request handler;
    2. log Environment %s has been created. at the end of background task.

    Failure flow:

    1. log Environment %s creation has been started. within HTTP request handler;
    2. log Environment %s creation has failed. at the end of background task.
  • For actions that can be processed within the HTTP request handler, log only success.

    Success flow:

    log User %s has been created. at the end of HTTP request handler.

    Failure flow:

    don't log anything, since most of the errors that could happen here are validation errors that will be corrected by the user and resubmitted.

Quotas application

Overview

TODO: This documentation is out of date and needs to be updated. quotas is a Django application that provides an implementation of per-object resource limits and usages.

Base model with quotas

A base model with quotas has to inherit QuotaModelMixin and define the QUOTAS_NAMES attribute as a list of all object quota names. The add_quotas_to_scope handler also has to be connected to the object's post_save signal for quota creation.

# in models.py

class MyModel(QuotaModelMixin, models.Model):
    # ...
    QUOTAS_NAMES = ['quotaA', 'quotaB', ...]

# in apps.py

signals.post_save.connect(
    quotas_handlers.add_quotas_to_scope,
    sender=MyModel,
    dispatch_uid='nodeconductor.myapp.handlers.add_quotas_to_mymodel',
)

Note that quotas can only be created in the add_quotas_to_scope handler. They cannot be added anywhere else in the code. This ensures that objects of the same model will have the same quotas.

Change object quotas usage and limit

To edit an object's quotas use:

  • set_quota_limit - replace the old quota limit with a new one
  • set_quota_usage - replace the old quota usage with a new one
  • add_quota_usage - add a value to the quota usage

Do not edit quotas manually, because this will break quotas in the object's ancestors.
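
A hypothetical usage sketch (the project scope and the 'ram' quota name are illustrative):

# Replace the limit and the usage of the 'ram' quota on a project.
project.set_quota_limit('ram', 10240)
project.set_quota_usage('ram', 2048)

# Increase the usage of the 'ram' quota by 1024.
project.add_quota_usage('ram', 1024)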

Parents for object with quotas

An object with quotas can have quota parents. If usage in a child is increased, it will be increased in the parent too. The get_quota_parents method has to be overridden to return a list of quota parents if the object has any. Only the first level of ancestors has to be added as parents; for example, if a membership is a child of a project and the project is a child of a customer, the membership's get_quota_parents has to return only the project, not the customer. It is not necessary for parents to have the same quotas as children, but logically they should have at least one common quota.

Check whether a quota is exceeded

To check whether one particular quota is exceeded, use the is_exceeded method of the quota. It can receive a usage delta or a threshold and checks whether the quota is exceeded taking the delta and/or the threshold into account.

To check whether any quota of an object or of its quota ancestors is exceeded, use the validate_quota_change method of the object with quotas. This method receives a dictionary of quota usage deltas and returns errors if one or more quotas of the object or its quota ancestors would be exceeded.
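
A hypothetical usage sketch (the 'ram' quota name, the delta value, the keyword argument name and the quotas relation on the scope are assumptions):

# Check a single quota of a project.
quota = project.quotas.get(name='ram')
if quota.is_exceeded(delta=2048):
    print('Adding 2048 MB of RAM would exceed the quota.')

# Check the project and its quota ancestors at once.
errors = project.validate_quota_change({'ram': 2048})
if errors:
    print(errors)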

Get sum of quotas

QuotaModelMixin provides the get_sum_of_quotas_as_dict method, which calculates the sum of each quota for the given scopes.

Allow user to edit quotas

Will be implemented soon.

Add quotas to quota scope serializer

QuotaSerializer can be used as the quotas serializer in the quota scope's controller.

Sort objects by quotas with django_filters.FilterSet

Inherit your FilterSet from QuotaFilterMixin and follow the steps below to enable ordering by quotas.

Usage:

1. Add quotas__limit and -quotas__limit to the filter's Meta order_by attribute if you want to order by quota limits, and quotas__usage and -quotas__usage if you want to order by quota usage.

2. Add quotas__<limit or usage>__<quota_name> to the Meta order_by attribute if you want to allow the user to order by <quota_name>. For example, quotas__limit__ram will enable ordering by the ram quota.

Ordering can be done only by one quota at a time.
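
A minimal sketch of such a FilterSet (the Project model and the ram quota are illustrative, and the import path of QuotaFilterMixin is an assumption):

import django_filters

from nodeconductor.quotas.filters import QuotaFilterMixin
from nodeconductor.structure import models as structure_models


class ProjectFilter(QuotaFilterMixin, django_filters.FilterSet):
    class Meta:
        model = structure_models.Project
        fields = []
        order_by = [
            'quotas__limit',
            '-quotas__limit',
            # Allow ordering by one particular quota, e.g. ram.
            'quotas__limit__ram',
            '-quotas__limit__ram',
        ]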

QuotaInline for admin models

quotas.admin contains a generic inline model, QuotaInline, which can be used as an inline model for any quota scope.

Global count quotas for models

A global count quota is a quota without a scope that stores the count of all instances of a model. To create a new global quota, add the field GLOBAL_COUNT_QUOTA_NAME = '<quota name>' to the model. (Please use the prefix nc_global for global quota names.)
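
For example (the quota name is illustrative, following the nc_global prefix convention):

from django.db import models


class MyModel(models.Model):
    GLOBAL_COUNT_QUOTA_NAME = 'nc_global_mymodel_count'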

Workflow for quota allocation

In order to prevent bugs when multiple simultaneous requests are performed, the following workflow is used.

  1. As soon as we know which quota will be used, we increase its usage. This is performed in the serializer's save or update method. If the quota usage goes over the limit, a validation error is raised. See, for example, InstanceFlavorChangeSerializer in the OpenStack plugin.
  2. If the backend API call for resource provisioning fails, the frontend quota usage is not modified. Instead, it is assumed that quota pulling is triggered either by the user or by cron.
  3. Quota usage is decreased only when the backend API call for resource deletion succeeds. See, for example, the delete_volume backend method in the OpenStack plugin.

Tasks and executors

NodeConductor performs logical operations using executors that combine several tasks.

Executors

An executor represents a logical operation on a backend, such as VM creation or resizing. It executes one or more background tasks and takes care of resource state updates and exception handling.

Tasks

There are 3 types of task queues: regular (used by default), heavy and background.

Regular tasks

Each regular task corresponds to a particular granular action, like a state transition, object deletion or backend method execution. Regular tasks are supposed to be combined and called in executors. It is not allowed to schedule tasks directly from views or serializers.

Heavy tasks

If a task takes too long to complete, you should try to break it down into smaller regular tasks in order to avoid flooding the general queue. Only if the backend does not allow this should you mark such tasks as heavy so that they use a separate queue.

from celery import shared_task


@shared_task(is_heavy_task=True)
def heavy(uuid=0):
    print '** Heavy %s' % uuid

Throttle tasks

Some backends don't allow executing several operations concurrently within the same scope. For example, one OpenStack settings object does not support provisioning more than 4 instances at once. In this case, task throttling should be used.

Background tasks

Tasks that are executed by celerybeat should be marked as "background". To mark a task as background, inherit it from core.BackgroundTask:

from nodeconductor.core import tasks as core_tasks


class MyTask(core_tasks.BackgroundTask):
    def run(self):
        print '** background task'

Explore BackgroundTask to discover background task features.

How to write imports

Grouping

  1. Imports from __future__.
  2. Standard Python library modules.
  3. Related third party imports.
  4. Imports from installed nodeconductor modules.
  5. Local application imports.

Example:

from __future__ import unicode_literals

import datetime

from django.conf import settings
from model_utils import FieldTracker

from nodeconductor.core import models as core_models, exceptions as core_exceptions
from nodeconductor.structure import models as structure_models
from nodeconductor_openstack.openstack import models as openstack_models

from nodeconductor_assembly_waldur.packages import models as package_models
from . import utils, managers

Ordering

In each group, imports should be ordered alphabetically, regardless of the import keyword.

Wrong:

from os import path
import datetime

Right:

import datetime
from os import path

(This is because we ignore the keywords 'from' and 'import' and order by the package name.)

Other rules

  1. Use relative imports when you are importing a module from the same application.

In the openstack plugin:

Wrong:

from nodeconductor_openstack.openstack import models

Right:

from . import models

  2. Group imports from one module on one line.

Wrong:

from nodeconductor.core import models as core_models
from nodeconductor.core import exceptions as core_exceptions

Right:

from nodeconductor.core import models as core_models, exceptions as core_exceptions

Suggestions

1. It is suggested to import whole modules from NodeConductor and its plugins, not separate classes.

Wrong:

from nodeconductor.structure.models import Project

Right:

from nodeconductor.structure import models as structure_models

How to write tests

Application tests structure

Application tests should follow this structure:

  • /tests/ - folder for all application tests.
  • /tests/test_my_entity.py - file for API call tests that are logically related to an entity. Example: test calls for project CRUD + actions.
  • /tests/test_my_entity.py:MyEntityActionTest - class for tests that are related to a particular endpoint. Examples: ProjectCreateTest, InstanceResizeTest.
  • /tests/unittests/ - folder for unit tests of a particular file.
  • /tests/unittests/test_file_name.py - file for tests of classes and methods from the application file "file_name". Examples: test_models.py, test_handlers.py.
  • /tests/unittests/test_file_name.py:MyClassOrFuncTest - class for tests that are related to a particular class or function from the file. Examples: ProjectTest, ValidateServiceTypeTest.

Tips for writing tests

  • cover important or complex functions and methods with unit tests;
  • write at least one test for a positive flow for each endpoint;
  • do not write tests for actions that do not exist. If you don't support the "create" action for any user, there is no need to write a test for it;
  • use fixtures (module fixtures.py) to generate the default structure (see the sketch below).
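
A minimal sketch of an endpoint test (the fixture, factory and URL helpers are illustrative assumptions):

from rest_framework import status, test

from . import factories, fixtures


class ProjectCreateTest(test.APITransactionTestCase):
    def setUp(self):
        # Hypothetical fixture that creates the default structure:
        # a customer, a project and related users.
        self.fixture = fixtures.ProjectFixture()

    def test_staff_can_create_project(self):
        self.client.force_authenticate(self.fixture.staff)
        payload = {
            'name': 'New project',
            'customer': factories.CustomerFactory.get_url(self.fixture.customer),
        }
        response = self.client.post(factories.ProjectFactory.get_list_url(), payload)
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)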

How to override settings in unit tests

Don't manipulate django.conf.settings directly, as Django won't restore the original values after such manipulations. Instead, you should use the standard context managers and decorators. They change a setting temporarily and revert to the original value after running the testing code. If you modify settings directly, you break test isolation by modifying a global variable.

If a configuration setting is not a plain string or number but a dictionary, and you need to update only one parameter, you should take the whole dict, copy it, modify the parameter, and override the whole dict.

Wrong:

with self.settings(NODECONDUCTOR={'INVITATION_LIFETIME': timedelta(weeks=1)}):
    tasks.cancel_expired_invitations()

Right:

nodeconductor_settings = settings.NODECONDUCTOR.copy()
nodeconductor_settings['INVITATION_LIFETIME'] = timedelta(weeks=1)

with self.settings(NODECONDUCTOR=nodeconductor_settings):
    tasks.cancel_expired_invitations()

Running tests

In order to run unit tests for a specific module, please execute the following command. Note that you should substitute your module name for the example nodeconductor_openstack. It is also assumed that you have already activated the virtual Python environment.

DJANGO_SETTINGS_MODULE=nodeconductor.server.test_settings nodeconductor test nodeconductor_openstack

How to write docs

Documentation for sysadmins

Documentation for sysadmins should contain a description of the settings that allow setting up and customizing WaldurMasterMind. It should be located in the wiki.

Documentation for developers

If documentation describes basic concepts that are not related to any particular part of the code, it should be located in the /docs folder. All other documentation for developers should be located in the code.

Tips for writing docs:
  • add descriptions for custom modules that are unique to a particular plugin;
  • add descriptions to base class methods that should be implemented by other developers;
  • don't add obvious comments for standard objects or parameters.

API documentation

TODO.

Internationalization

Per-request internationalization is enabled by default for the English and Estonian languages. Client requests will respect the Accept-Language header.

Here are the guidelines for specifying translation strings:

  • Build a proper sentence: start with a capital letter, end with a period.

    Right: _('Deletion was scheduled.')

    Wrong: _('deletion was scheduled')

  • Use named-string interpolation instead of positional interpolation if message has several parameters.

    Right: _('Operation was successfully scheduled for %(count)d instances: %(names)s.')

    Wrong: _('Operation was successfully scheduled for %s instances: %s.')

  • help_text, verbose_name, exception messages and response messages should be marked for translation (see the sketch below), but don't mark message templates for the event or alert loggers.
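
A minimal sketch of marking model strings for translation (the model and its field are illustrative):

from django.db import models
from django.utils.translation import ugettext_lazy as _


class Environment(models.Model):
    name = models.CharField(
        max_length=150,
        verbose_name=_('name'),
        help_text=_('Name of the environment.'),
    )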