VPP  0.8
A high-level modern C++ API for Vulkan
General overview of rendering with VPP

There are several key concepts to become familiar with in order to implement a rendering engine with VPP (and also core Vulkan):

  1. A render graph: defines the data flow. This is a directed acyclic graph consisting of two kinds of nodes. vpp::Process nodes represent rendering processes: programs which generate or transform images. vpp::Attachment nodes represent the images the processes operate on. A process can output images, but also consume images (possibly rendered by another process in the chain). In that case, a dependency arc is created between the two processes, ensuring that the pixels needed by the later process have already been generated by the earlier one.

    To define a render graph, derive a class from the vpp::RenderGraph base class. Read the appropriate documentation sections for more detail. For simple applications you will likely need only a single process.
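
    As an illustration, a minimal graph with a single process might be sketched as below. Only vpp::RenderGraph, vpp::Process and vpp::Attachment are VPP names mentioned above; the derived class name and the attachment format argument are hypothetical.

    ```cpp
    // Sketch only: a render graph with one process node and one image node.
    // The exact way of declaring attachment formats may differ in the real API.
    class MyRenderGraph : public vpp::RenderGraph
    {
    public:
        vpp::Process m_render;                   // rendering process (program node)
        vpp::Attachment< FormatRGBA8 > m_color;  // image the process writes to
    };
    ```

    With more processes, consuming the m_color output in another process would add a dependency arc ordering the two processes.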

  2. A rendering pipeline. This object defines how a single process operates. It is the core abstraction in rendering. It encapsulates all the needed information, which is divided into the following aspects:

    • Vertex sources.
    • Auxiliary data sources.
    • Output data targets.
    • Shaders.

    A vertex source is the mechanism for passing geometry data to the rendering pipeline. Vulkan renders geometry arranged into points, lines, triangles, or tessellation patches. The coordinates defining these primitives are passed in vertex buffers, and you must define the format of this data. For more information on this topic, read about vpp::VertexStruct, vpp::InstanceStruct, vpp::Attribute, vpp::inVertexData, vpp::VertexBufferView and vpp::gvector.
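
    For illustration, a vertex format definition might be sketched like this. The pattern follows vpp::VertexStruct and vpp::Attribute as named above, but the exact attribute declaration syntax is an assumption:

    ```cpp
    // Sketch only: a per-vertex data structure with position and color.
    template< vpp::ETag TAG >
    struct TVertex : public vpp::VertexStruct< TAG, TVertex >
    {
        vpp::Attribute< TAG, float, float, float, float > m_position;
        vpp::Attribute< TAG, float, float, float, float > m_color;
    };
    ```

    Such a structure can then back a vpp::gvector used as the vertex buffer and be consumed in the pipeline through vpp::inVertexData.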

    Auxiliary data sources allow you to supply more data to your rendering engine: textures, view and projection parameters, constants, etc. This kind of data is passed to the pipeline through binding points. VPP offers a number of binding point classes for various types of resources, e.g.: vpp::inUniformBuffer, vpp::inUniformBufferDyn, vpp::ioBuffer, vpp::ioBufferDyn, vpp::ioImageBuffer, vpp::inTextureBuffer, vpp::ioImage, vpp::inSampler, vpp::inConstSampler, vpp::inTexture, vpp::inSampledTexture, vpp::inConstSampledTexture. You can then bind actual data buffers to these binding points.

    Data buffers are represented by vpp::Buffer and vpp::Image subclasses (including the very convenient vpp::gvector template). Bindings are stored inside small objects of the vpp::ShaderDataBlock class. You can quickly bind or unbind an entire vpp::ShaderDataBlock (containing multiple resources) between draw calls.
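
    As a sketch, a binding point declaration and the corresponding binding through a shader data block might look as follows. Only the VPP class names come from the text above; the member names and the update call are assumptions:

    ```cpp
    // Sketch only: declare a uniform buffer binding point inside a pipeline
    // configuration class, then bind an actual buffer to it via a data block.
    class ParamsPipeline : public vpp::PipelineConfig
    {
    public:
        vpp::inUniformBuffer m_framePars;   // binding point for per-frame data
    };

    // At draw-call preparation time (names hypothetical):
    // vpp::ShaderDataBlock dataBlock = ...;
    // dataBlock.update ( ( m_framePars = frameParsBuffer ) );
    ```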

    Shaders are the actual programs run on the GPU. Shaders receive data from vertex sources, bound resources, and images rendered by earlier processes in the chain (aka input attachments). They write pixels and other data to the process's output images (aka output attachments) or to bound writable buffers. VPP provides the following classes for shader support: vpp::vertexShader, vpp::geometryShader, vpp::tessControlShader, vpp::tessEvalShader, vpp::fragmentShader, vpp::computeShader. You write shaders directly in C++ and bind the routines to objects of these classes.

    Because VPP found a way to execute code written in C++ on the GPU, it also provides many support classes and functions for GPU-level coding: GPU-side scalar, vector and matrix types, control constructs, and built-in functions operating on these types.

    In general, these functions and classes form a set similar to the GLSL built-in library.
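
    A GPU routine written this way might be sketched as follows. The vpp::Float and vpp::Vec4 types stand for the GLSL-like type set mentioned above and should be treated as assumptions:

    ```cpp
    // Sketch only: C++ code intended to run on the GPU via VPP.
    void fragmentShaderCode ( vpp::FragmentShader* pShader )
    {
        using namespace vpp;
        Float intensity = 0.5f;                              // GPU-side scalar
        Vec4 color = Vec4 ( intensity, intensity, intensity, 1.0f );
        // ... write 'color' to an output attachment binding point ...
    }
    ```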

    There are some additional mechanisms (e.g. push constants and queries) that are useful in specific situations.

    To define a rendering pipeline class, derive it from vpp::PipelineConfig or vpp::ComputePipelineConfig.
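
    A pipeline configuration class might then be sketched as below. The vpp::vertexShader and vpp::fragmentShader binding classes are the ones listed above; the constructor parameters and the shader callback signatures are assumptions:

    ```cpp
    // Sketch only: a pipeline config with C++ shader routines bound to it.
    class MyPipelineConfig : public vpp::PipelineConfig
    {
    public:
        MyPipelineConfig ( const vpp::Process& hProcess ) :
            vpp::PipelineConfig ( hProcess ),
            m_vertexShader ( this, & MyPipelineConfig::onVertexShader ),
            m_fragmentShader ( this, & MyPipelineConfig::onFragmentShader )
        {}

        void onVertexShader ( vpp::VertexShader* pShader ) { /* GPU code */ }
        void onFragmentShader ( vpp::FragmentShader* pShader ) { /* GPU code */ }

    private:
        vpp::vertexShader m_vertexShader;
        vpp::fragmentShader m_fragmentShader;
    };
    ```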

  3. Rendering options and parameters. Many important parameters affect the rendering process in a pipeline. For example: should polygons be drawn filled or as wireframe? Should back-facing polygons be discarded? Is the Z-buffer used to determine visibility? Is the stencil buffer used?

    VPP defines a unified container for all such options: the vpp::RenderingOptions class.

    One of the important rendering options is the viewport configuration. VPP provides the vpp::Viewport class to store viewport dimensions. vpp::Viewport objects are registered in a vpp::RenderingOptions object.
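
    For illustration, configuring options might be sketched like this. Only vpp::RenderingOptions and vpp::Viewport are taken from the text; the setter and registration method names are assumptions:

    ```cpp
    // Sketch only: fill a unified options container and register a viewport.
    vpp::RenderingOptions options;
    options.setEnableDepthTest ( true );              // Z-buffer visibility
    options.setCullMode ( VK_CULL_MODE_BACK_BIT );    // discard back-facing polys
    options.addViewport ( vpp::Viewport ( 1280, 720 ) );
    ```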

  4. Interfacing with the window system displaying images on screen. See vpp::Surface and vpp::SwapChain for more details.
  5. Control mechanisms - they specify how to set things in motion. Among these are: queues, command buffers, commands, and synchronization primitives.

    One of the simplest high-level classes in this category is vpp::RenderManager, which encapsulates all of these mechanisms and hides them behind a simple interface. This class is useful for developing examples, small applications, and quick experiments with render graphs and pipelines.
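
    A typical frame loop with this class might be sketched as below; only vpp::RenderManager itself is named in the text, the method names are assumptions:

    ```cpp
    // Sketch only: rendering one frame through the high-level manager.
    vpp::RenderManager renderManager ( swapChain );
    renderManager.beginFrame();
    renderManager.render ( renderGraph, renderOptions );  // execute the graph
    renderManager.endFrame();                             // present the image
    ```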

    For more advanced graphics engines, you will need to interact with Vulkan concepts directly. VPP provides convenient C++ wrappers over them. See the docs for the classes: vpp::Queue, vpp::CommandBuffer, vpp::CommandBufferPool, vpp::CommandBufferRecorder, vpp::Semaphore, vpp::Fence, vpp::Event. VPP also provides an extensive set of commands corresponding to Vulkan rendering commands, in some cases acting as higher-level wrappers over them. See the docs for vpp::NonRenderingCommands, vpp::UniversalCommands and vpp::ExtendedCommands for more info on commands.
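
    An explicit control-flow sketch using these wrappers might look as follows. The classes are the ones listed above, but the member functions are assumptions:

    ```cpp
    // Sketch only: record a command buffer, submit it and wait for the GPU.
    vpp::CommandBufferPool pool ( device, queueFamilyIndex );
    vpp::CommandBuffer cmdBuffer = pool.createBuffer();
    // ... record commands, e.g. with vpp::CommandBufferRecorder ...
    vpp::Queue queue ( device );
    vpp::Fence fence ( device );
    queue.submit ( cmdBuffer, fence );   // asynchronous submission
    fence.wait();                        // block until the GPU is done
    ```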

  6. Device information. This is an important part: knowing what rendering device the system is using and what its capabilities are. The vpp::PhysicalDevice class allows you to query for that information. The vpp::Device class represents the logical device (GPU) and is a parameter to most VPP functions. If the system has several devices (multiple GPUs), you can work with all of them in parallel by using multiple vpp::Device objects.
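
    For illustration, selecting a device might be sketched as below; vpp::Instance and the enumeration call are assumptions beyond the two classes named above:

    ```cpp
    // Sketch only: enumerate physical devices and create a logical device.
    vpp::Instance instance;
    for ( const vpp::PhysicalDevice& physDev : instance.enumeratePhysicalDevices() )
    {
        // query capabilities here, then create the logical device
        vpp::Device device ( physDev );
        // 'device' is the parameter passed to most VPP functions
    }
    ```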