EpicGames.Horde
Identifier for a user account
Id to construct from
Identifier for a user account
Id to construct from
Id to construct from
Converter to and from instances.
Creates a new user account
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Password for the user
Whether the account is enabled
Creates a new user account
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Password for the user
Whether the account is enabled
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Password for the user
Whether the account is enabled
Creates the admin account
Password for the user
Creates the admin account
Password for the user
Password for the user
Response from the request to create a new user account
The created account id
Response from the request to create a new user account
The created account id
The created account id
Update request for a user account
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Optional secret token for API access
Password for the user
Whether the account is enabled
Update request for a user account
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Optional secret token for API access
Password for the user
Whether the account is enabled
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Optional secret token for API access
Password for the user
Whether the account is enabled
Update request for the current user account
Old password for the user
New password for the user
Update request for the current user account
Old password for the user
New password for the user
Old password for the user
New password for the user
Creates a new user account
Id of the account
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Whether the account is enabled
Creates a new user account
Id of the account
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Whether the account is enabled
Id of the account
Name of the user
Perforce login identifier
Claims for the user
Description for the account
User's email address
Whether the account is enabled
Message describing a claim for an account
Claim type
Value of the claim
Message describing a claim for an account
Claim type
Value of the claim
Claim type
Value of the claim
Dashboard login request
Username
Password
URL to return to upon success
Dashboard login request
Username
Password
URL to return to upon success
Username
Password
URL to return to upon success
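The login request fields above might be modeled as a simple record; the type and property names here are assumptions inferred from the summaries, not the actual API surface:

```csharp
// Hypothetical sketch of the dashboard login request described above.
// Names are assumptions; only the three documented fields are modeled.
public record DashboardLoginRequest(
	string Username,    // Username
	string Password,    // Password
	string? ReturnUrl); // URL to return to upon success
```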
Gets all entitlements for an account
Whether the user is an administrator
List of scopes with entitlements
Gets all entitlements for an account
Whether the user is an administrator
List of scopes with entitlements
Whether the user is an administrator
List of scopes with entitlements
Creates a new user account
Name of the scope
Actions for this scope
Creates a new user account
Name of the scope
Actions for this scope
Name of the scope
Actions for this scope
Wraps a string used to describe an ACL action
Name of the action
Wraps a string used to describe an ACL action
Name of the action
Name of the action
Type converter for NamespaceId to and from JSON
Type converter from strings to NamespaceId objects
Name of an ACL scope
Name of an ACL scope
The root scope name
Append another name to this scope
Name to append
New scope name
Serializes objects to JSON
Converts objects to strings
Claim for an ACL
Type of the claim
Value for the claim
Normalized hostname of an agent
The text representing this id
Constructor
Hostname of the agent
Type converter from strings to PropertyFilter objects
Class which serializes AgentId objects to JSON
Status of an agent. Must match RpcAgentStatus.
Unspecified state.
Agent is running normally.
Agent is currently shutting down, and should not be assigned new leases.
Agent is in an unhealthy state and should not be assigned new leases.
Agent is currently stopped.
Agent is busy performing other work (e.g. serving an interactive user)
Parameters to update an agent
Whether the agent is currently enabled
Whether the agent is ephemeral
Request a conform be performed using the current agent set
Request that a full conform be performed, removing all intermediate files
Request the machine be restarted
Request the machine be shut down
Request the machine be restarted without waiting for leases to complete
Pools for this agent
New comment
Parameters to update an agent
Whether the agent is currently enabled
Whether the agent is ephemeral
Request a conform be performed using the current agent set
Request that a full conform be performed, removing all intermediate files
Request the machine be restarted
Request the machine be shut down
Request the machine be restarted without waiting for leases to complete
Pools for this agent
New comment
Whether the agent is currently enabled
Whether the agent is ephemeral
Request a conform be performed using the current agent set
Request that a full conform be performed, removing all intermediate files
Request the machine be restarted
Request the machine be shut down
Request the machine be restarted without waiting for leases to complete
Pools for this agent
New comment
Response for queries to find a particular lease within an agent
Identifier for the lease
Identifier for the parent lease. Used to terminate hierarchies of leases.
The agent id
Cost of this agent, per hour
Name of the lease
Log id for this lease
Time at which the lease started (UTC)
Time at which the lease finished (UTC)
Whether this lease has started executing on the agent yet
Details of the payload being executed
Outcome of the lease
State of the lease (for AgentLeases)
Response for queries to find a particular lease within an agent
Identifier for the lease
Identifier for the parent lease. Used to terminate hierarchies of leases.
The agent id
Cost of this agent, per hour
Name of the lease
Log id for this lease
Time at which the lease started (UTC)
Time at which the lease finished (UTC)
Whether this lease has started executing on the agent yet
Details of the payload being executed
Outcome of the lease
State of the lease (for AgentLeases)
Identifier for the lease
Identifier for the parent lease. Used to terminate hierarchies of leases.
The agent id
Cost of this agent, per hour
Name of the lease
Log id for this lease
Time at which the lease started (UTC)
Time at which the lease finished (UTC)
Whether this lease has started executing on the agent yet
Details of the payload being executed
Outcome of the lease
State of the lease (for AgentLeases)
Information about a workspace synced on an agent
The Perforce server and port to connect to
User to log into Perforce with (e.g. buildmachine)
Identifier to distinguish this workspace from other workspaces
The stream to sync
Custom view for the workspace
Whether to use an incremental workspace
Method to use when syncing/materializing data from Perforce
Minimum disk space that must be available *after* syncing this workspace (in megabytes)
Threshold for when to trigger an automatic conform of the agent. Measured in megabytes free on disk
Information about a workspace synced on an agent
The Perforce server and port to connect to
User to log into Perforce with (e.g. buildmachine)
Identifier to distinguish this workspace from other workspaces
The stream to sync
Custom view for the workspace
Whether to use an incremental workspace
Method to use when syncing/materializing data from Perforce
Minimum disk space that must be available *after* syncing this workspace (in megabytes)
Threshold for when to trigger an automatic conform of the agent. Measured in megabytes free on disk
The Perforce server and port to connect to
User to log into Perforce with (e.g. buildmachine)
Identifier to distinguish this workspace from other workspaces
The stream to sync
Custom view for the workspace
Whether to use an incremental workspace
Method to use when syncing/materializing data from Perforce
Minimum disk space that must be available *after* syncing this workspace (in megabytes)
Threshold for when to trigger an automatic conform of the agent. Measured in megabytes free on disk
Information about an agent
The agent's unique ID
Friendly name of the agent
Whether the agent is currently enabled
Status of the agent
Cost estimate per-hour for this agent
The current session id
Whether the agent is ephemeral
Whether the agent is currently online
Whether this agent has expired
Whether a conform job is pending
Whether a full conform job is pending
Whether a restart is pending
Whether a shutdown is pending
The reason for the last shutdown
Last time a conform was attempted
Number of times a conform has been attempted
Time of the next scheduled conform
The current client version
Properties for the agent
Resources for the agent
Last update time of this agent
Last time that the agent was online
Pools for this agent
Capabilities of this agent
Array of active leases.
Current workspaces synced on the agent
Comment for this agent
Information about an agent
The agent's unique ID
Friendly name of the agent
Whether the agent is currently enabled
Status of the agent
Cost estimate per-hour for this agent
The current session id
Whether the agent is ephemeral
Whether the agent is currently online
Whether this agent has expired
Whether a conform job is pending
Whether a full conform job is pending
Whether a restart is pending
Whether a shutdown is pending
The reason for the last shutdown
Last time a conform was attempted
Number of times a conform has been attempted
Time of the next scheduled conform
The current client version
Properties for the agent
Resources for the agent
Last update time of this agent
Last time that the agent was online
Pools for this agent
Capabilities of this agent
Array of active leases.
Current workspaces synced on the agent
Comment for this agent
The agent's unique ID
Friendly name of the agent
Whether the agent is currently enabled
Status of the agent
Cost estimate per-hour for this agent
The current session id
Whether the agent is ephemeral
Whether the agent is currently online
Whether this agent has expired
Whether a conform job is pending
Whether a full conform job is pending
Whether a restart is pending
Whether a shutdown is pending
The reason for the last shutdown
Last time a conform was attempted
Number of times a conform has been attempted
Time of the next scheduled conform
The current client version
Properties for the agent
Resources for the agent
Last update time of this agent
Last time that the agent was online
Pools for this agent
Capabilities of this agent
Array of active leases.
Current workspaces synced on the agent
Comment for this agent
Telemetry data for an agent
Telemetry data for an agent
Telemetry data sample
Telemetry data sample
Updates an existing lease
Updates an existing lease
Information about an agent pending admission to the farm
Unique key used to identify this agent
Agent host name
Description for the agent
Information about an agent pending admission to the farm
Unique key used to identify this agent
Agent host name
Description for the agent
Unique key used to identify this agent
Agent host name
Description for the agent
Approve an agent for admission to the farm
Agents to approve
Approve an agent for admission to the farm
Agents to approve
Agents to approve
Approve an agent for admission to the farm
Unique key for identifying the machine
Agent id to use for the machine. Set to null to use the default.
Approve an agent for admission to the farm
Unique key for identifying the machine
Agent id to use for the machine. Set to null to use the default.
Unique key for identifying the machine
Agent id to use for the machine. Set to null to use the default.
Well-known property names for agents
The agent id
The UBT platform enum
The UBT platform group enum
The operating system (Linux, MacOS, Windows)
Compatible operating system (mainly for Linux WINE agents to advertise Windows support)
Whether the agent is a .NET self-contained app
Pools that this agent belongs to
Pools requested by the agent to join when registering with server
The total size of storage space on drive, in bytes
Amount of available free space on drive, in bytes
IP address used for sending compute task payloads
Port used for sending compute task payloads
Protocol version for compute task payloads
AWS: Instance ID
AWS: Instance type
Whether the Wine compatibility layer is enabled (for running Windows applications on Linux)
Whether the agent is trusted
Standard resource names
Number of logical cores
Amount of RAM, in GB
Identifier for a user
Id to construct from
Identifier for a user
Id to construct from
Id to construct from
Converter to and from instances.
Outcome from a lease. Values must match lease_outcome.proto.
Default value.
The lease was executed successfully
The lease was not executed successfully, but cannot be run again.
The lease was cancelled by request
State of a lease. Values must match lease_state.proto.
Default value.
Set by the server when waiting for an agent to accept the lease. Once processed, the agent should transition the lease state to active.
The agent is actively working on this lease.
The agent has finished working on this lease.
Set by the server to indicate that the lease should be cancelled.
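The lease states listed above can be sketched as an enum; the member names here are assumptions based on the descriptions, and as noted the actual values must match lease_state.proto:

```csharp
// Hypothetical sketch of the lease state machine described above.
// Member names are assumptions; values must match lease_state.proto.
public enum LeaseState
{
	Unspecified, // Default value
	Pending,     // Waiting for an agent to accept the lease; the agent then transitions it to active
	Active,      // The agent is actively working on this lease
	Completed,   // The agent has finished working on this lease
	Cancelled,   // Set by the server to indicate that the lease should be cancelled
}
```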
Updates an existing lease
Mark this lease as aborted
Identifier for a pool
Id to construct from
Identifier for a pool
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Identifier for a session
Id to construct from
Identifier for a session
Id to construct from
Id to construct from
Default empty value for session id
Converter to and from instances.
Information about a session
Unique id for this session
Start time for this session
Finishing time for this session
Version of the software running during this session
Information about a session
Unique id for this session
Start time for this session
Finishing time for this session
Version of the software running during this session
Unique id for this session
Start time for this session
Finishing time for this session
Version of the software running during this session
Response data for a utilization request
Start hour
End hour
List of pools
Total admin time
Total hibernating time
Total agents
Representation of an hour of time
Pool id
Number of agents in this pool
Total time spent doing admin work
Time spent hibernating
Total time agents in this pool were doing work for other pools
List of streams
Represents one stream in one pool in one hour of telemetry
Stream Id
Total time
Class which describes an artifact on Horde which can be serialized to JSON.
Name of the artifact
Type of the artifact
Artifact description
Base URL for downloading from
Name of the ref to download
Optional URL to the job that produced this artifact
Keys associated with the artifact
Metadata associated with the artifact
Filter for the files selected for download
Default constructor
Constructor
Deserialize a descriptor from utf8 bytes
Data to deserialize from
New descriptor instance
Writes this descriptor to storage
File to read from
Cancellation token for the operation
Serializes a descriptor
Data for the serialized descriptor
Writes this descriptor to storage
File to write to
Cancellation token for the operation
Storage backend that utilizes the local filesystem
Constructor
The backing object store
Logger interface
Constructor
Gets the path for storing a file on disk
Maps a file into memory for reading, and returns a handle to it
Path to the file
Offset of the data to retrieve
Length of the data
Handle to the data. Must be disposed by the caller.
Reads a ref from a file on disk
Enumerate all the refs in a file store
Root directory to search
Storage backend which communicates with the Horde server over HTTP for content
Constructor
Factory for constructing HttpStorageBackend instances
Constructor
Creates a new HTTP storage backend
Base path for all requests
Custom access token to use for requests
Creates a new HTTP storage backend
Namespace to create a client for
Custom access token to use for requests
Whether to enable the backend cache, which caches full bundles to disk
Storage backend which communicates with Jupiter over HTTP
Constructor
In-memory implementation of a storage backend
All data stored by the client
Accessor for all refs stored by the client
Request for a blob to be read
Reference to the blob data
Request for a blob to be read
Reference to the blob data
Reference to the blob data
Response from a read request
Response from a read request
Stats for a batch read
Stats for a batch read
Implements an efficient pipeline for streaming blob data
Number of items to read from the input queue before partitioning into batches
Maximum gap between reads that should be coalesced and executed together
Whether to verify hashes of data read from storage
Constructor
Overridable dispose method
Gets stats for the reader
Adds a batch reader to the given pipeline
Handle responses from the read
Responses from the read
Cancellation token for the operation
Request for a chunk to be read before being written to disk
File to read from
Offset within the output file
Handle to the blob data
Request for a chunk to be read before being written to disk
File to read from
Offset within the output file
Handle to the blob data
File to read from
Offset within the output file
Handle to the blob data
Implementation of designed for reading leaf chunks of data from storage
Maximum number of exports to write in a single request
Constructor
Represents an output file that write requests may be issued against
The relative output path
File metadata
File entry with metadata about the target file
Constructor
Opens the file for writing, setting its length on the first run
Callback for a file having been fully written
Request to output a block of data: a chunk which needs to be written to an output file
Request to output a block of data: a chunk which needs to be written to an output file
Batches file write requests
Number of files that have been written
Total number of bytes that have been written
Number of writes to execute sequentially vs in parallel
If false, disables output to disk. Useful for performance testing.
If true, hashes output files after writing to verify their integrity
If true, outputs verbose information to the log
Sink for write requests
Constructor
Dispose remaining items in the queue
Adds tasks for the writer to an async pipeline
Pipeline instance
Number of parallel writes
Processes requests from the given input channel
Cancellation token for the operation
Data for an alias in the storage system. An alias is a named weak reference to a node.
Handle to the target blob for the alias
Rank for the alias
Data stored inline with the alias
Data for an alias in the storage system. An alias is a named weak reference to a node.
Handle to the target blob for the alias
Rank for the alias
Data stored inline with the alias
Handle to the target blob for the alias
Rank for the alias
Data stored inline with the alias
Data for an alias in the storage system. An alias is a named weak reference to a node.
Handle to the target blob for the alias
Rank for the alias
Data stored inline with the alias
Data for an alias in the storage system. An alias is a named weak reference to a node.
Handle to the target blob for the alias
Rank for the alias
Data stored inline with the alias
Handle to the target blob for the alias
Rank for the alias
Data stored inline with the alias
Attribute used to denote the converter for an object type
Type of the converter
Constructor
Base class for converter types that can serialize blobs from native C# types. Semantics mirror .
Determines if the converter can handle the given type
The type that needs to be converted
True if this converter can handle the given type
Serializer for typed values to blob data. Mimics the interface for familiarity.
Reads a strongly typed value
Writes a strongly typed value
Data for an individual node. Must be disposed after use.
Type of the blob
Raw data for the blob. Lifetime of this data is tied to the lifetime of the object; consumers must not retain references to it.
Handles to referenced blobs
Constructor
Overridable dispose method
True if derived instances should dispose managed resources. False when called from a finalizer.
Implementation of for instances.
Constructor
Identifier for a blob within a particular namespace.
Accessor for the internal path string
Constructor
Path to the blob. The meaning of this string is implementation defined.
Constructor
Path to the blob. The meaning of this string is implementation defined.
Constructor
The base locator to append to
Characters to append
Constructor
The base locator to append to
Characters to append
Whether the blob locator is valid
The base blob locator
Fragment within the base blob
Determines if this locator can be unwrapped into an outer locator/fragment pair
Split this locator into a locator and fragment
Receives the base blob locator
Receives the blob fragment
True if the locator was unwrapped, false otherwise
Checks whether this blob is within the given folder
Name of the folder
True if the blob id is within the given folder
Type converter from strings to BlobLocator objects
Class which serializes BlobLocator objects to JSON
Request to read a blob from storage
Request to read a blob from storage
Response from reading a blob from storage
Response from reading a blob from storage
Batch of responses from the reader
Batch of responses from the reader
Options for
Maximum number of responses to buffer before pausing.
Number of requests to enumerate before flushing the current batch
Number of batches to fetch in parallel
Helper class to sort requested reads for optimal coherency within bundles
Type of user data to include with requests
Constructor
Constructor
Adds a new read request
The request to add
Adds a new request source
Method to construct the sequence of items
Indicate that we've finished adding new items to the reader
Reads all responses from the reader
Cancellation token for the operation
Attempts to read a batch from the queue
Batch of responses
Waits until there is data available to read
Cancellation token for the operation
Handles serialization of blobs using instances.
Deserialize an object
Return type for deserialization
Data to deserialize from
Options to control serialization
Serialize an object into a blob
Type of object to serialize
Writer for the blob data
Object to serialize
Options to control serialization
Type of the serialized blob
Options for serializing blobs
Known converter types
Default options instance
Constructor
Create a read-only version of these options
Gets a converter for the given type
Creates options for serializing blobs compatible with a particular server API version
The server API version
Extension methods for serializing blob types
Deserialize an object
Return type for the deserialized object
Handle to the blob to deserialize
Options to control serialization
Cancellation token for the operation
Deserialize an object
Return type for the deserialized object
Handle to the blob to deserialize
Cancellation token for the operation
Deserialize an object
Return type for the deserialized object
Handle to the blob to deserialize
Cancellation token for the operation
Serialize an object to storage
Writer for serialized data
The object to serialize
Cancellation token for the operation
Handle to the serialized blob
Reads data for a ref from the store, along with the node's contents.
Store instance to write to
The ref name
Minimum coherency for any cached value to be returned
Options to control serialization
Cancellation token for the operation
Node for the given ref, or null if it does not exist
Reads a ref from the store, throwing an exception if it does not exist
Store instance to write to
Id for the ref
Minimum coherency of any cached result
Options to control serialization
Cancellation token for the operation
The blob instance
Reads data for a ref from the store, along with the node's contents.
Store instance to write to
The ref name
Minimum coherency for any cached value to be returned
Options to control serialization
Cancellation token for the operation
Node for the given ref, or null if it does not exist
Reads a ref from the store, throwing an exception if it does not exist
Store instance to write to
Id for the ref
Minimum coherency of any cached result
Options to control serialization
Cancellation token for the operation
The blob instance
Identifies the type of a blob
Nominal identifier for the type
Version number for the serializer
Identifies the type of a blob
Nominal identifier for the type
Version number for the serializer
Nominal identifier for the type
Version number for the serializer
Number of bytes in a serialized blob type instance
Constructor
Deserialize a type from a byte span
Serialize to a byte span
Bundle version number
Initial version number
Added the BundleExport.Alias property
Back out change to include aliases. Will likely do this through an API rather than baked into the data.
Use data structures which support in-place reading and writing.
Add import hashes to imported nodes
Structure bundles as a sequence of self-contained packets (uses V2 code)
Last item in the enum. Used for
The current version number
Last version using the V1 pipeline
Last version using the V2 pipeline
Signature for a bundle
Version number for the following file data
Length of the initial header
Signature for a bundle
Version number for the following file data
Length of the initial header
Version number for the following file data
Length of the initial header
Number of bytes in a signature when serialized
Validates that the prelude bytes for a bundle header are correct
The signature bytes
Length of the header data, including the prelude
Writes a signature to the given memory
Options for creating a storage cache
Number of packet readers to keep in the cache
Size of a bundle page
Number of bundle pages to keep in the cache.
Size of the header cache
Size of the packet cache
Cache for reading bundle data.
Instance of an empty cache
Accessor for the default allocator
Size of a bundle page to keep in the cache
Size of the configured header cache
Size of the configured packet cache
Whether there is a packet cache present
Semaphore for writing new data
Constructor
Constructor
Options for the cache
Constructor
Options for the cache
Inner allocator to use. Will be wrapped in an allocator that tracks allocations against the cache's budget.
Gets stats for the cache
Adds a bundle info object to the cache
Try to read a bundle info object from the cache
Adds an encoded bundle packet to the cache
Try to read an encoded bundle packet from the cache
Adds a decoded bundle packet to the cache
Try to read a decoded bundle packet from the cache
Indicates the compression format in the bundle
Packets are uncompressed
LZ4 compression
Gzip compression
Oodle compression (Selkie)
Brotli compression
ZStandard compression
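The supported packet compression formats can be summarized as an enum; the type name and member names here are assumptions based on the list above:

```csharp
// Hypothetical sketch of the bundle compression formats described above.
public enum BundleCompressionFormat
{
	None,   // Packets are uncompressed
	LZ4,    // LZ4 compression
	Gzip,   // Gzip compression
	Oodle,  // Oodle compression (Selkie)
	Brotli, // Brotli compression
	ZStd,   // ZStandard compression
}
```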
Utility methods for bundles
Compress a block of data into a newly allocated block of memory
Format for the compressed data
The data to compress
The compressed data
Compress a data packet
Format for the compressed data
The data to compress
Writer for output data
Length of the compressed data
Decompress a packet of data
Format of the compressed data
Compressed data
Buffer to receive the decompressed data
Base class for packet handles
Locator for this bundle
Constructor
Flush contents of the bundle to storage
Open the bundle for reading
Get data for a packet of data from the bundle
Attempt to get a locator for this bundle
Generic flushed bundle handle; can be either V1 or V2 format.
Constructor
Options for configuring a bundle serializer
Default options value
Maximum version number of bundles to write
Maximum payload size of a blob
Compression format to use
Minimum size of a block to be compressed
Maximum amount of data to store in memory. This includes any background writes as well as bundles being built.
Base class for an implementation of , providing implementations for some common functionality using bundles.
Allocator which trims the cache to keep below a maximum size
Accessor for the storage backend
Cache for bundle data
Stats for reading bundles
Constructor
Creates a bundle storage namespace around a memory backend
Creates a bundle storage namespace around a memory backend
Creates a bundle storage namespace around a directory on the filesystem
Creates a bundle storage namespace around a directory on the filesystem
Helper method for GC which allows enumerating all references to other bundles
Helper method for GC which allows enumerating all references to other bundles
Identifier for the type of a section in the bundle header
List of custom types
Imports of other bundles
List of export headers
References to exports in other bundles
Packet headers
Merged export headers and references
Header for the contents of a bundle.
Maximum size for a section
Maximum number of exports from a single bundle
Maximum number of export refs from a single bundle
Signature bytes
Types for exports within this bundle
Bundles that we reference nodes in
Nodes exported from this bundle
List of data packets within this bundle
Constructor
Constructor
Writes data for this bundle to a sequence builder
Serialize the bundle to a byte array
Reads a bundle header from memory
Memory to read from
New header object
Reads a bundle header from a stream
Stream to deserialize from
Cancellation token for the operation
New header object
Reads a bundle header from a stream
File signature
Stream to deserialize from
Cancellation token for the operation
New header object
Construct a header from the given data encoded in the latest format
Version to serialize as
Stream to read from
Length of the header
Cancellation token for the operation
Collection of node types in a bundle
Constructor
Constructor
Reads a type collection from a block of memory
Reads a type collection from a stream
Gets the size of memory required to serialize a collection of types
Number of bytes required to serialize the types
Serializes a set of types to a fixed block of memory
Writer to serialize to
Collection of imported node references
Constructor
Constructor
Reads a collection from memory
Reads a collection from a stream
Measure the size of memory required to store a collection of import locators
Size in bytes of the output buffer
Serialize a collection of locators to memory
Writer for output data
Descriptor for a compression packet
Size of this structure when serialized
Compression format for this packet
Offset of the packet within the payload stream
Encoded length of the packet
Decoded length of the packet
Constructor
Compression format for the packet
Offset of the data within the payload stream
Size of the encoded data
Size of the decoded data
Read from a byte array
Serialize the struct to memory
Collection of information about packets in a bundle
Constructor
Constructor
Reads a collection from memory
Reads a collection from a stream
Measure the size of memory required to store a collection of packets
Size in bytes of the output buffer
Serialize a collection of packets to memory
Writer to serialize to
Entry for a node exported from an object
Number of bytes in a serialized export object
Raw data for this export
Raw data for the header
Hash of the node data
Type id of the node. Can be used to look up the type information from the bundle header.
Packet containing this export's data
Offset within the packet of the node data
Length of the node
References to other nodes
Constructor
Constructor
Writes a new export to a block of memory
Entry for a node exported from an object
Constructor
Constructor
Constructor
Reads an export collection from a stream
Serializes this collection as a single section
Measure the size of memory required to store a collection of exports
Size in bytes of the output buffer
Serialize a collection of exports to memory
Writer to serialize to
Measure the size of memory required to store a collection of export refs
Size in bytes of the output buffer
Serialize a collection of export refs to memory
Writer to serialize to
Reference to a node in another bundle
Index into the import table of the blob containing the referenced node. Can be -1 for references within the same bundle.
Node imported from the bundle
Reference to a node in another bundle
Index into the import table of the blob containing the referenced node. Can be -1 for references within the same bundle.
Node imported from the bundle
Index into the import table of the blob containing the referenced node. Can be -1 for references within the same bundle.
Node imported from the bundle
Number of bytes in the serialized object
Deserialize this object from memory
Serialize this object to memory
Collection of information about exported nodes
Data used to store this collection
Constructor
Constructor
Computed information about a bundle
Locator for the bundle
Bundle header
Length of the header. Required to offset packets from the start of the bundle
Constructor
Writes nodes from bundles in an instance.
Queued set of requests from a particular bundle
Encoded bundle packet queued to be read
Information about a pending read
Accessor for the cache
Constructor
Cache for data
Logger for output
Reads a bundle header
Locator for the bundle
Cancellation token for the operation
Information about the bundle
Reads decoded packet data from a bundle
Locator for the bundle to read
Index of the bundle packet
Cancellation token for the operation
Data for the packet
Reads encoded packet data from a bundle
Locator for the bundle to read
Index of the bundle packet
Cancellation token for the operation
Data for the packet
Reads a bundle header from the queue
Bundle header to read
Cancellation token for the operation
Reads a node from a bundle
Locator for the bundle
Index of the export
Cancellation token for the operation
Node data read from the given bundle
Gets stats for the reader
Implementation of for nodes which can be read from storage
Constructor
Writes nodes of a tree to an , packed into bundles. Each instance is single threaded,
but multiple instances may be written to in parallel.
Constructor
Store to write data to
Reader for serialized node data
Base path for new nodes
Options for the writer
Optional logger for trace information
Copy constructor
Internal constructor
Mark this writer as complete, allowing its data to be serialized.
Gets an output buffer for writing.
Current size in the existing buffer that has been written to
Desired size of the returned buffer
Buffer to be written into.
Finish writing a node.
Type of the node that was written
Used size of the buffer
References to other nodes
Aliases for the node
Cancellation token for the operation
Handle to the written node
Flushes all the current nodes to storage
Cancellation token for the operation
Implements the primary storage writer interface for V2 bundles. Writes exports into packets, and flushes them to storage in bundles.
Compressed length of this bundle
Constructor
Helper method to check a precondition is valid at runtime, regardless of build configuration.
Handle class for blobs exported from a bundle
Accessor for the packet that this export is in
Export index within the packet
Constructor
Constructor
Attempt to parse an export index from the given fragment
Get an identifier for this export within the outer packet
Append a locator for this export to the given string builder
Appends an export identifier to the given string builder
Implementation of which also contains a hash
Constructor
Accessor for data structures stored into a serialized bundle packet.
Each raw packet contains:
- 8 bytes: Standard bundle signature. The length field specifies the size of the following data, including the signature itself.
- 4 bytes: Decoded packet length
- 1 byte: Compression format
- ?? bytes: Compressed packet data
After decoding, the packet contains the following:
- 8 bytes: Standard bundle signature. The length field specifies the size of the following data, including the signature itself.
- 4 bytes: offset of type table from the start of the packet
- 4 bytes: offset of import table from the start of the packet
- 4 bytes: offset of export table from the start of the packet
The type table is constructed as:
- 4 bytes: Number of entries
- 20 bytes * Number of entries: BlobType data
The import table is constructed as:
- 4 bytes: Number of entries
- 4 bytes * (Number of entries + 1): Offset of each entry from the start of the packet, with a sentinel value for the end of the last entry. Length of each entry is implicit by next entry - this entry.
The export index is constructed as:
- 4 bytes: Number of entries
- 4 bytes * (Number of entries + 1): Offset of each entry from the start of the packet, with a sentinel value for the end of the last entry. Length of each entry is implicit by next entry - this entry.
Each import is written as:
- VarInt: Base import index, with a +1 bias, or zero for 'none'
- Utf8 string containing fragment on top of base import, without a null terminator.
Each export is written as:
- 4 bytes: Length of payload
- ?? bytes: Payload data
- VarInt: Type index
- VarInt: Number of imports
- VarInt * Number of imports: Import index
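The decoded packet layout above can be sketched as a small reader. This is an illustrative sketch only: the field widths and the count-plus-sentinel table shape are taken from the description, while the little-endian byte order and all function names are assumptions.

```python
import struct

def read_decoded_packet_header(data: bytes):
    """Parse the header of a *decoded* packet as described above.

    Layout (assumed little-endian): 8-byte signature, then three 4-byte
    offsets for the type, import, and export tables.
    """
    signature = data[0:8]  # standard bundle signature, kept opaque here
    type_off, import_off, export_off = struct.unpack_from("<III", data, 8)
    return signature, type_off, import_off, export_off

def read_offset_table(data: bytes, table_offset: int):
    """Read an import/export index: a count followed by count+1 offsets.

    The extra sentinel offset marks the end of the last entry, so each
    entry's length is the next offset minus its own.
    """
    (count,) = struct.unpack_from("<I", data, table_offset)
    offsets = struct.unpack_from("<%dI" % (count + 1), data, table_offset + 4)
    return [(offsets[i], offsets[i + 1] - offsets[i]) for i in range(count)]
```

The sentinel trick means entry lengths never need to be stored explicitly; only `count + 1` offsets are written per table.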
Type for packet blobs
Constructor
Data for the packet
Accessor for the underlying packet data
Length of this packet
Gets the number of types in this packet
Gets a type from the packet
Gets the number of imports in this packet
Gets the locator for a particular import
Index of the import to retrieve
Gets the number of exports in this packet
Gets the bulk data for a particular export
Encodes a packet
Compression format for the encoded data
Writer for the encoded data
Decodes this packet
Raw data for an imported node
Base index for this locator
The utf8 fragment appended to the base index
Raw data for an imported node
Base index for this locator
The utf8 fragment appended to the base index
Base index for this locator
The utf8 fragment appended to the base index
Bias for indexes into the import table
Reads an import from a block of memory
Data for an exported node in a packet
Data for this export
Constructor
Gets the header for this export
Gets the payload for this export
Data for an exported node in a bundle
Index of the type for this export
Index of imports for this export
Constructor
Base class for packet handles
Bundle containing this packet
Reads an export from this packet
Append the identifier for this packet to the given string builder
Appends the locator to the given string builder
Handle to a packet within a bundle.
Offset of the packet within the bundle
Length of the packet within the bundle
Constructor
Constructor
Parse a fragment containing an offset and length
Reads an export from this packet
Index of the export
Cancellation token for the operation
Appends an identifier for a packet to the given buffer
Counters used for tracking operations performed by a BundleStorageNamespace
Utility class for constructing BlobData objects from a packet, caching any computed handles to other blobs.
Accessor for the underlying packet data
Constructor
Data for the packet
Owner for the packet data
Reads this packet in its entirety
Reads an export from this packet
Index of the export
Reads an export from this packet
Index of the export
Gets an import handle for the packet
Writes exports into a new bundle packet
Current length of the packet
Enumerate all imported bundle locators
Constructor
Gets the number of exports currently in this writer
Writes a new blob to this packet
Complete the current export
Size of data written to the export buffer
Index of the blob type
Indexes of the imported blobs
Reads data for a blob written to storage
Add a new blob type to be written
Type to add
Index of the type
Adds a new imported blob locator
Handle to add
Index of the import
Gets the import assigned to a particular index
Gets the import assigned to a particular index
Gets data to write new export
Size of data in the current buffer that has been written to
Aligns an offset to a power-of-2 boundary
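Rounding an offset up to a power-of-2 boundary is typically done with the standard add-and-mask idiom; a minimal sketch (the function name is illustrative, not the actual Horde API):

```python
def align(offset: int, alignment: int) -> int:
    """Round offset up to the next multiple of a power-of-2 alignment.

    Adding (alignment - 1) and masking off the low bits rounds up
    without a division; this only works when alignment is a power of 2.
    """
    return (offset + alignment - 1) & ~(alignment - 1)
```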
Gets data to write new export
Gets an output span and updates the length accordingly
Mark the current packet as complete
Base class for implementations of a content defined chunker
Source channel.
Output channel. The order of items written to the source writer will be preserved in the output reader. Each
output item should be read completely (through calls to )
before reading the next.
Base class for input to a . Allows the chunker to read data into a buffer as required.
Length of the input data
Optional user specified data to be propagated to the output
Starts a task to read the next chunk of data. Note that this method may be called again before the previous task completes.
Buffer to store the data
Cancellation token for the operation
Implementation of for files on disk
Constructor
Implementation of for data in memory
Constructor
Enumerates chunks from an input file
Rolling hash for this chunk
Accessor for the chunk's data
User specified data from the
Moves to the next output chunk
Cancellation token for the operation
True if there was another output chunk
Simple serial implementation of content chunking
Constructor
Parallel implementation of
Constructor
Index of known nodes that can be used for deduplication.
Default value for maximum number of keys
Constructor
Add a blob to the cache
Type of the blob
Reference to the blob data
Gets stats for the copy operation
Extension methods for
Wraps a with a
Creates a dedupe writer
The store instance to read from
Maximum number of keys to include in the cache
Creates a writer using a refname as a base path
The store instance to read from
Ref name to use as a base path
Maximum number of keys to include in the cache
Adds a directory tree to the cache
Dedupe writer to operate on
Reference to the directory to add
Cancellation token for the operation
Adds a chunked data stream to the cache
Dedupe writer to operate on
Reference to a data node in the stream
Cancellation token for the operation
Stores a object in a fixed memory buffer for transport over the wire.
Type of the blob
The blob data payload
References to other blobs
Constructor
Encode a blob data object to a byte array
Encode a blob data object to a byte array
Gets the length of an encoded blob data object
Encodes blob data into an existing buffer
Database of file metadata, storing timestamps for trees of files and hashes for ranges within them. Implemented as a SQLite database.
Id value for the root directory
Creates a new in-memory database for file metadata
Creates a new database for metadata backed by a file on disk
Adds a single file to the database
Adds a set of files to the database
Finds all the files in a particular directory
Finds all the files in a particular directory
Finds all the files in a particular directory
Gets the full name of a file
Gets the full name of a directory
Removes a file and all its chunk metadata
Adds a record for a new file chunk
Adds multiple file chunk records
Gets a chunk row
Find all chunks with a particular hash and length
Finds all the chunks within a particular file
Remove all chunks for a particular file
Remove all chunks for a set of files
Adds a new directory to the collection
Adds multiple directories to the collection
Gets the definition for a particular directory
Gets the full name of a directory
Gets the full name of a directory
Finds all directories within a given parent directory
Removes a directory and all its subdirectories
Removes a directory and all its subdirectories
Removes the contents of a directory, without removing the directory itself
Removes all subdirectories and files starting at the given roots
Metadata for a file
Unique id for this file
Identifier for the directory containing this file
Name of the file
Last modified timestamp for the file
Length of the file
Metadata for a file
Unique id for this file
Identifier for the directory containing this file
Name of the file
Last modified timestamp for the file
Length of the file
Unique id for this file
Identifier for the directory containing this file
Name of the file
Last modified timestamp for the file
Length of the file
Default constructor
Constructor for new file rows
Metadata for a directory
Unique id for this directory
Parent directory identifier
Name of the directory
Metadata for a directory
Unique id for this directory
Parent directory identifier
Name of the directory
Unique id for this directory
Parent directory identifier
Name of the directory
Default constructor
Constructor for new directory rows
Metadata for a file chunk
Unique id for the row
Id of the file that this chunk belongs to
Starting offset within the file of this chunk
Length of the chunk
Hash of the chunk data
Metadata for a file chunk
Unique id for the row
Id of the file that this chunk belongs to
Starting offset within the file of this chunk
Length of the chunk
Hash of the chunk data
Unique id for the row
Id of the file that this chunk belongs to
Starting offset within the file of this chunk
Length of the chunk
Hash of the chunk data
Default constructor
Constructor for new chunk rows
Property names for git commits
The root of the tree for this commit
Author of the commit
Person that committed the change
Parent commit
Representation of a Git commit object
Properties for the commit
Commit messages
Constructor
Adds a new property to the collection
Gets a property value with the given key name
Name of the property
The property value
Gets property values with the given key name
Name of the property
The property value
The tree for this commit
Parents of this commit
Serializes this object
Type fields for git objects
Blob objects
Tree objects
Commit objects
Utility methods for manipulating Git objects
Writes the header for an object to a stream
Type of the object
Size of the object
Appends data for a header to the hash
Type of the object
Size of the object
Hash for the header data
Writes a header for an object
Type of the object
Size of the object
Buffer to receive the data
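The header written here follows the standard Git object encoding: the ASCII type name, a space, the decimal payload size, and a NUL byte; hashing the header followed by the payload yields the object id. A minimal sketch (the helper name is illustrative):

```python
import hashlib

def git_object_hash(obj_type: str, payload: bytes) -> str:
    """Compute a Git object id: SHA-1 of '<type> <size>\\0' + payload."""
    header = f"{obj_type} {len(payload)}".encode("ascii") + b"\x00"
    return hashlib.sha1(header + payload).hexdigest()
```

This reproduces what `git hash-object` computes for a blob.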
Valid file modes for tree entries
Regular file
Executable file
Child tree
Representation of a Git tree object
Entries for this tree. Must be sorted for consistency.
Gets the hash of this tree object.
Hash of the object
Serializes this object
Entry for a Git tree
Mode for this entry. Can be any of the values from
Name of this entry
Hash of the object for this entry
Constructor
Factory for constructing storage namespace instances with an HTTP backend
Constructor
Creates a new HTTP storage namespace
Base path for all requests
Custom access token to use for requests
Creates a new HTTP storage namespace
Namespace to create a client for
Custom access token to use for requests
Whether to enable the backend cache, which caches full bundles to disk
Interface for reading nodes from storage
Type to deserialize
Version of the current node, as specified via
Locations of all referenced nodes. These handles do not have valid hashes.
Gets the next serialized blob handle
Gets the next serialized blob handle
Reader for blob objects
Type to deserialize
Version of the current node, as specified via
Total length of the data in this node
Amount of data remaining to be read
Raw data for this blob
Locations of all referenced nodes.
Constructor
Gets the next serialized blob reference
Gets the next serialized blob reference
Reference to another node in storage. This type is similar to , but without a hash.
Accessor for the innermost import
Flush the referenced data to underlying storage
Reads the blob's data
Cancellation token for the operation
Attempt to get a path for this blob.
Receives the blob path on success.
True if a path was available, false if the blob has not yet been flushed to storage.
Typed interface to a particular blob handle
Type of the deserialized blob
Options for deserializing the blob
Extension methods for
Create a typed blob reference
Target type
Blob reference to wrap
Options for deserializing the blob
Gets a path to this blob that can be used to describe blob references over the wire.
Handle to query
Interface for a writer of node objects
Options for serialization
Accessor for the memory written to the current blob
Adds an alias to the blob currently being written
Name of the alias
Rank to use when finding blobs by alias
Inline data to store with the alias
Flush any pending nodes to storage
Cancellation token for the operation
Create another writer instance, allowing multiple threads to write in parallel.
New writer instance
Finish writing a blob that has been written into the output buffer.
Type of the node that was written
Cancellation token for the operation
Handle to the written node
Finish writing a blob that has been written into the output buffer.
Type of the node that was written
Cancellation token for the operation
Handle to the written node
Writes a reference to another blob. NOTE: This does not write anything to the underlying output stream, which prevents the data from forming a Merkle tree
unless uniqueness is guaranteed by writing a hash separately.
Referenced blob
Writes a reference to another blob. The blob's hash is serialized to the output stream.
Referenced blob
Information about an alias to be added alongside a blob
Name of the alias
Rank of the alias
Inline data to be stored for the alias
Information about an alias to be added alongside a blob
Name of the alias
Rank of the alias
Inline data to be stored for the alias
Name of the alias
Rank of the alias
Inline data to be stored for the alias
Base class for implementations.
Constructor
Computes the hash of the written data
Writes a handle to another node
Request a new buffer to write to
Size of data written to the current buffer
Desired size for the buffer
New buffer
Write the current blob to storage
Type of the blob
Size of the blob to write
References to other blobs
Aliases for the new blob
Cancellation token for the operation
New buffer
Implementation of which discards any written data
Constructor
Implementation of which just buffers data in memory
Constructor
Clears the contents of this writer
Helper function to get the index of a blob
Handle to a node. Can be used to reference nodes that have not been flushed yet.
Hash of the target node
Typed interface to a particular blob handle
Type of the deserialized blob
Contains the value for a blob ref
Contains the value for a blob ref
Helper methods for creating blob handles
Create an untyped blob handle
Imported blob interface
Hash of the blob
Handle to the blob
Create a typed blob handle
Existing blob reference
Options for deserializing the target blob
Handle to the blob
Create a typed blob handle
Hash of the blob
Imported blob interface
Options for deserializing the target blob
Handle to the blob
Extension methods for
Gets a BlobRefValue from an IBlobRef
Interface for a object storage service.
Whether this storage backend supports HTTP redirects for reads and writes
Attempts to open a read stream for the given path.
Relative path within the bucket
Offset to start reading from
Length of data to read
Cancellation token for the operation
Reads an object into memory and returns a handle to it.
Path to the file
Offset of the data to retrieve
Length of the data
Cancellation token for the operation
Handle to the data. Must be disposed by the caller.
Writes a stream to the storage backend. If the stream throws an exception during read, the write will be aborted.
Path to write to
Stream to write
Cancellation token for the operation
Path to the uploaded object
Tests whether the given path exists
Relative path within the bucket
Cancellation token for the operation
True if the object exists
Gets the size of a particular object
Relative path within the bucket
Cancellation token for the operation
Size of the object, or -1 if it does not exist
Deletes a file with the given path
Relative path within the bucket
Cancellation token for the operation
Async task
Gets an HTTP redirect for a read request
Path to read from
Cancellation token for the operation
URI to read the data from
Gets an HTTP redirect for a write request
Path to write to
Cancellation token for the operation
Path for retrieval, and URI to upload the data to
Gets stats for this storage backend
Typed instance for dependency injection
Extension methods for
Creates a typed wrapper around the given object store
Store to wrap
Attempts to open a read stream for the given path.
Store to read from
Relative path within the bucket
Cancellation token for the operation
Reads an object into memory and returns a handle to it.
Store to read from
Path to the file
Cancellation token for the operation
Handle to the data. Must be disposed by the caller.
Writes a stream to the storage backend. If the stream throws an exception during read, the write will be aborted.
Store to write to
Path to write to
Data to write
Cancellation token for the operation
Path to the uploaded object
Interface for a low-level storage backend.
Whether this storage backend supports HTTP redirects for reads and writes
Attempts to open a read stream for the given path.
Relative path within the bucket
Offset to start reading from
Length of data to read
Cancellation token for the operation
Reads an object into memory and returns a handle to it.
Path to the file
Offset of the data to retrieve
Length of the data
Cancellation token for the operation
Handle to the data. Must be disposed by the caller.
Writes a stream to the storage backend. If the stream throws an exception during read, the write will be aborted.
Data stream
List of referenced blobs
Path prefix for the uploaded data
Cancellation token for the operation
Path to the uploaded object
Writes a stream to the storage backend. If the stream throws an exception during read, the write will be aborted.
Locator for the new blob
Data stream
Imported blobs. If omitted, the backend will parse them from the stream data.
Cancellation token for the operation
Path to the uploaded object
Gets an HTTP redirect for a read request
Path to read from
Cancellation token for the operation
URI to read the data from
Gets an HTTP redirect for a write request
Path to write to
Imports for this blob
Cancellation token for the operation
Path for retrieval, and URI to upload the data to
Gets an HTTP redirect for a write request
Imports for this blob
Prefix for the uploaded data
Cancellation token for the operation
Path for retrieval, and URI to upload the data to
Finds blobs with the given alias. Unlike refs, aliases do not serve as GC roots.
Alias for the blob
Maximum number of aliases to return
Cancellation token for the operation
Blobs matching the given handle
Reads data for a ref from the store
The ref name
Minimum coherency for any cached value to be returned
Cancellation token for the operation
Blob pointed to by the ref
Batch request to update metadata
Options for the update
Cancellation token for the operation
Gets stats for this storage backend
Utility methods for storage backend implementations
Unique session id used for unique ids
Incremented value used for each supplied id
Creates a unique name with a given prefix
The prefix to use
Unique name generated with the given prefix
Extension methods for
Attempts to open a read stream for the given path.
Backend to read from
Object name within the store
Cancellation token for the operation
Stream for the object
Attempts to open a read stream for the given path.
Backend to read from
Object name within the store
Cancellation token for the operation
Stream for the object
Reads an object as an array of bytes
Backend to read from
Object name within the store
Cancellation token for the operation
Contents of the object
Writes a block of memory to storage
Backend to read from
Data to be written
Prefix for the uploaded data
Cancellation token for the operation
Client for the storage system
Creates a storage namespace for the given id
Namespace to manipulate
Storage namespace instance. May be null if the namespace does not exist.
Extension methods for
Creates a new storage namespace, throwing an exception if it does not exist
Options for a new ref
Time until a ref is expired
Whether to extend the remaining lifetime of a ref whenever it is fetched. Defaults to true.
Options for a new ref
Time until a ref is expired
Whether to extend the remaining lifetime of a ref whenever it is fetched. Defaults to true.
Time until a ref is expired
Whether to extend the remaining lifetime of a ref whenever it is fetched. Defaults to true.
Interface for the storage system.
Creates a writer for updating the namespace
Cancellation token for async writing. The writer will flush on dispose unless this cancellation token is signalled.
New writer instance
Creates a new blob handle by parsing a locator
Path to the blob
New handle to the blob
Creates a new writer for storage blobs
Base path for any nodes written from the writer.
Options for serializing classes
Cancellation token used for any buffered blob writes. Blob writers will flush on close, unless this cancellation token is signalled.
New writer instance. Must be disposed after use.
Finds blobs with the given alias. Unlike refs, aliases do not serve as GC roots.
Alias for the blob
Maximum number of aliases to return
Cancellation token for the operation
Blobs matching the given handle
Reads data for a ref from the store
The ref name
Minimum coherency for any cached value to be returned
Cancellation token for the operation
Blob pointed to by the ref
Gets a snapshot of the stats for the storage namespace.
Indicates the maximum age of an entry returned from a cache in the hierarchy
Oldest allowed timestamp for a returned result
Indicates the maximum age of an entry returned from a cache in the hierarchy
Oldest allowed timestamp for a returned result
Oldest allowed timestamp for a returned result
Maximum age for a cached value to be returned
Sets the earliest time at which the entry must have been valid
Maximum age of any returned cache value. Taken from the moment that this object was created.
Tests whether this value is set
Determines if this cache time deems a particular cache entry stale
Time at which the cache entry was valid
Maximum cache time to test against
Implicit conversion operator from datetime values.
Implicit conversion operator from timespan values.
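The cache-time semantics above can be sketched in two steps: a maximum age is converted into an oldest-allowed timestamp (captured at the moment the value is created, per the description), and an entry is stale when it was last known valid before that cutoff. The function names below are illustrative, not the actual C# API.

```python
from datetime import datetime, timedelta, timezone

def oldest_allowed(max_age: timedelta, now: datetime) -> datetime:
    """Convert a maximum age into the oldest acceptable timestamp,
    measured from 'now' (the real type captures this at construction)."""
    return now - max_age

def is_stale(entry_valid_at: datetime, cutoff: datetime) -> bool:
    """An entry is stale if it was last valid before the cutoff."""
    return entry_valid_at < cutoff
```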
Stats for the storage system
Stat name to value
Add a new stat to the list
Prints the table of stats to the logger
Subtract a base set of stats from this one
Extension methods for
Creates a new blob handle by parsing a locator
Storage namespace to operate on
Path to the blob
Options for deserializing the blob
New handle to the blob
Creates a new blob reference from a locator and hash
Storage namespace to operate on
Hash of the target blob
Path to the blob
New handle to the blob
Creates a new blob reference from a locator and hash
Storage namespace to operate on
Hash of the target blob
Path to the blob
Options for deserializing the blob
New handle to the blob
Create a blob ref from a RefValue
Create a typed blob ref from a RefValue
Creates a new blob ref from a ref name
The store instance to read from
Name of the reference
Maximum age for cached responses
New handle to the blob
Creates a new blob ref from a ref name
The store instance to read from
Name of the reference
Maximum age for cached responses
Options for deserializing the blob
New handle to the blob
Creates a writer using a refname as a base path
The store instance to read from
Ref name to use as a base path
Adds an alias to a given blob
The store instance to write to
Alias for the blob
Locator for the blob
Rank for this alias. In situations where an alias has multiple mappings, the alias with the highest rank will be returned by default.
Additional data to be stored inline with the alias
Cancellation token for the operation
Removes an alias from a blob
The store instance to write to
Name of the alias
Locator for the blob
Cancellation token for the operation
Finds blobs with the given alias. Unlike refs, aliases do not serve as GC roots.
The store instance to read from
Alias for the blob
Cancellation token for the operation
Blobs matching the given handle
Checks if the given ref exists
The store instance to read from
Name of the reference to look for
Minimum coherency for any cached value to be returned
Cancellation token for the operation
True if the ref exists, false if it did not exist
Reads a ref from the store, throwing an exception if it does not exist
The store instance to read from
Id for the ref
Minimum coherency of any cached result
Cancellation token for the operation
The ref target
Writes a new ref to the store
The store instance to write to
Ref to write
Handle to the target blob
Options for the new ref
Cancellation token for the operation
Unique identifier for the blob
Reads data for a ref from the store
The store instance to write to
The ref identifier
Cancellation token for the operation
Gets a snapshot of the stats for the storage namespace.
Interface for a batching storage writer
Creates a new writer for storage blobs
Base path for any nodes written from the writer.
Options for serializing classes
New writer instance. Must be disposed after use.
Adds an alias to a given blob
Alias for the blob
Locator for the blob
Rank for this alias. In situations where an alias has multiple mappings, the alias with the highest rank will be returned by default.
Additional data to be stored inline with the alias
Cancellation token for the operation
Removes an alias from a blob
Name of the alias
Locator for the blob
Cancellation token for the operation
Adds a new ref to the store
Ref to write
Handle to the target blob
Options for the new ref
Cancellation token for the operation
Unique identifier for the blob
Removes a ref from the store
The ref identifier
Cancellation token for the operation
Flush any buffered writes
Cancellation token for the operation
Default base implementation of which combines metadata updates into a
Constructor
Base class for storage namespaces that wrap a direct key/value type store without any merging/splitting.
Constructor
Constructor
Constructor
Create an in-memory storage namespace
Identifier for a storage namespace
The text representing this id
Constructor
Unique id for the namespace
Constructor
Unique id for the namespace
Type converter for NamespaceId to and from JSON
Type converter from strings to NamespaceId objects
A node containing arbitrary compact binary data
Static accessor for the blob type guid
The compact binary object
Imported nodes
Constructor
The compact binary object
List of imports for attachments
Representation of a data stream, split into chunks along content-aware boundaries using a rolling hash ().
Chunks are pushed into a tree hierarchy as data is appended to the root, with nodes of the tree also split along content-aware boundaries with granularity.
Once a chunk has been written to storage, it is treated as immutable.
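Content-defined chunking, as described above, picks split points from the data itself via a rolling hash, so identical runs of content tend to chunk identically regardless of where they sit in the stream. The sketch below uses a toy additive rolling hash and made-up parameters, not the actual hash or window sizes used by Horde.

```python
def chunk_boundaries(data: bytes, min_size: int = 32, divisor: int = 64, window: int = 16):
    """Split 'data' along content-defined boundaries; returns chunk lengths.

    A boundary is declared where a rolling hash over a sliding window
    hits a fixed pattern and the chunk has reached a minimum size.
    """
    chunks = []
    start = 0
    rolling = 0
    for i, byte in enumerate(data):
        rolling += byte
        if i - start >= window:
            rolling -= data[i - window]  # slide the window forward
        if i - start + 1 >= min_size and rolling % divisor == 0:
            chunks.append(i + 1 - start)  # boundary: close current chunk
            start = i + 1
            rolling = 0
    if start < len(data):
        chunks.append(len(data) - start)  # trailing chunk, may be short
    return chunks
```

Because a chunk becomes immutable once written, appending to the stream only rewrites the chunks (and interior tree nodes) after the last stable boundary.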
Copies the contents of this node and its children to the given output stream
The output stream to receive the data
Options for controlling serialization
Cancellation token for the operation
Copy the contents of the node to the output stream without creating the intermediate FileNodes
Handle to the data to read
The output stream to receive the data
Cancellation token for the operation
Extracts the contents of this node to a file
Handle to the data to read
File to write with the contents of this node
Cancellation token for the operation
Serialize this node and its children into a byte array
Options to control serialization
Array of data stored by the tree
Type of a chunked data node
Unknown node type
Leaf node
An interior node
Reference to a chunked data node
Type of the referenced node
Length of the data stream within this node
Rolling hash for this chunk. Only serialized for leaf node references.
Handle to the target node
Reference to a chunked data node
Type of the referenced node
Length of the data stream within this node
Rolling hash for this chunk. Only serialized for leaf node references.
Handle to the target node
Type of the referenced node
Length of the data stream within this node
Rolling hash for this chunk. Only serialized for leaf node references.
Handle to the target node
Leaf node constructor
Interior node constructor
Gets the target interior node handle
Gets the target leaf node handle
Read the node which is the target of this ref
Enumerates all the leaf nodes for this node
Cancellation token for the operation
Reference to a
Length of the leaf data
Reference to the blob
Reference to a
Length of the leaf data
Reference to the blob
Length of the leaf data
Reference to the blob
Stores the flat list of chunks produced from chunking a single data stream
Hash of the data
Handles to the leaf chunks
Stores the flat list of chunks produced from chunking a single data stream
Hash of the data
Handles to the leaf chunks
Hash of the data
Handles to the leaf chunks
File node that contains a chunk of data
Guid for the blob type
Data for this node
Create an empty leaf node
Copy the contents of the node to the output stream without creating the intermediate FileNodes
The raw node data
The output stream to receive the data
Cancellation token for the operation
Creates nodes from the given file
Writer for output nodes
File info
Options for finding chunk boundaries
Cancellation token for the operation
Hash of the full file data
Creates nodes from the given file
Writer for output nodes
File info
Options for finding chunk boundaries
Cancellation token for the operation
Hash of the full file data
Creates nodes from the given file
Writer for output nodes
Stream to read from
Options for finding chunk boundaries
Cancellation token for the operation
Hash of the full file data
Creates nodes from the given file
Writer for output nodes
Stream to read from
Options for finding chunk boundaries
Stats for the copy operation
Cancellation token for the operation
Hash of the full file data
Determines how much data to append to an existing leaf node
Data to be appended
Options for chunking the data
Receives the rolling hash at the end of this chunk
The number of bytes to append
Static accessor for the blob type
Options for creating interior nodes
Minimum number of children in each node
Target number of children in each node
Maximum number of children in each node
Threshold hash value for splitting interior nodes
Options for creating interior nodes
Minimum number of children in each node
Target number of children in each node
Maximum number of children in each node
Threshold hash value for splitting interior nodes
Minimum number of children in each node
Target number of children in each node
Maximum number of children in each node
Threshold hash value for splitting interior nodes
Default settings
Constructor
An interior file node
Static accessor for the blob type guid
Child nodes
Constructor
Create a tree of nodes from the given list of handles, splitting nodes in each layer based on the hash of the last node.
List of leaf handles
Options for splitting the tree
Output writer for new interior nodes
Cancellation token for the operation
Handle to the root node of the tree
Create a tree of nodes from the given list of handles, splitting nodes in each layer based on the hash of the last node.
List of leaf nodes
Options for splitting the tree
Output writer for new interior nodes
Cancellation token for the operation
Handle to the root node of the tree
Split a list of leaf handles into a layer of interior nodes
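The layer-splitting approach described above can be sketched as repeatedly grouping nodes into interior parents, with boundaries chosen from the hash of the last child so the grouping is content-defined and stable under appends. The node representation (nested tuples), hash choice, and parameters are all illustrative assumptions.

```python
import hashlib

def build_tree(leaves, min_children=2, max_children=4, threshold=64):
    """Collapse a flat list of leaf ids into a tree of interior nodes."""
    def split_layer(nodes):
        layer, group = [], []
        for node in nodes:
            group.append(node)
            h = hashlib.sha1(repr(node).encode()).digest()[0]
            # Split when the last child's hash is under the threshold (past
            # the minimum group size), or when the group hits the maximum.
            if (len(group) >= min_children and h < threshold) or len(group) >= max_children:
                layer.append(tuple(group))
                group = []
        if group:
            layer.append(tuple(group))
        return layer

    nodes = list(leaves)
    while len(nodes) > 1:
        nodes = split_layer(nodes)  # each pass builds one interior layer
    return nodes[0] if nodes else None
```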
Copy the contents of the node to the output stream without creating the intermediate FileNodes
Source data
The output stream to receive the data
Cancellation token for the operation
Converter for interior node types
Constructor
Constructor
Version number for serialized data
Gets the blob type for a particular Horde api version
Options for creating file nodes
Options for creating leaf nodes
Options for creating interior nodes
Constructor
Options for creating a specific type of file nodes
Minimum chunk size
Maximum chunk size. Chunks will be split on this boundary if another match is not found.
Target chunk size for content-slicing
Options for creating a specific type of file nodes
Minimum chunk size
Maximum chunk size. Chunks will be split on this boundary if another match is not found.
Target chunk size for content-slicing
Minimum chunk size
Maximum chunk size. Chunks will be split on this boundary if another match is not found.
Target chunk size for content-slicing
Default settings
Window size to use when scanning for split points
Accessor for the BuzHash chunking threshold
Constructor
Fixed size chunks to use
Utility class for generating FileNode data directly into instances, without constructing node representations first.
Default buffer length when calling CreateAsync/AppendAsync
Length of the file so far
Constructor
Writer for new nodes
Constructor
Writer for new nodes
Chunking options
Reset the current state
Resets the state of the current leaf node
Creates data for the given file
File to append
Cancellation token for the operation
Creates data for the given file
File to append
Size of the read buffer
Cancellation token for the operation
Creates data from the given stream
Stream to append
Cancellation token for the operation
Creates data from the given stream
Stream to append
Size of the read buffer
Cancellation token for the operation
Creates data from the given data
Stream to append
Cancellation token for the operation
Appends data to the current file
Stream containing data to append
Cancellation token for the operation
Appends data to the current file
Stream containing data to append
Size of the read buffer
Cancellation token for the operation
Appends data to the current file
Data to append
Cancellation token for the operation
Determines how much data to append to an existing leaf node
Current data in the leaf node
Data to be appended
Current BuzHash of the data
Options for chunking the data
The number of bytes to append
Complete the current file, and write all open nodes to the underlying writer
Cancellation token for the operation
Handle to the root node
Writes the contents of the current leaf node to storage
Cancellation token for the operation
Handle to the written leaf node
Describes a chunked data stream
Hash of the stream as a contiguous buffer
Handle to the root chunk containing the data
Describes a chunked data stream
Hash of the stream as a contiguous buffer
Handle to the root chunk containing the data
Hash of the stream as a contiguous buffer
Handle to the root chunk containing the data
Writes chunked data to an output writer
Length of the current stream
Constructor
Reset the current state
Creates data for the given file
File to append
Cancellation token for the operation
Creates data from the given data
Stream to append
Cancellation token for the operation
Appends data to the current file
Data to append
Cancellation token for the operation
Complete the current file, and write all open nodes to the underlying writer
Cancellation token for the operation
Handle to the root node
Complete the current file, and write all open nodes to the underlying writer
Cancellation token for the operation
Handle to the root node
A node representing commit metadata
Static accessor for the blob type guid
The commit number
Reference to the parent commit
Human readable name of the author of this change
Optional unique identifier for the author. May be an email address, user id, etc...
Human readable name of the committer of this change
Optional unique identifier for the committer. May be an email address, user id, etc...
Message for this commit
Time that this commit was created
Contents of the tree at this commit
Metadata for this commit, keyed by arbitrary GUID
Constructor
A node containing ref data
Static accessor for the blob type guid
Hash of the root node
References to attachments. We embed this in the ref node to ensure any aliased blobs have a hard reference from the root.
Constructor
Constructor
Flags for a directory node
No flags specified
A directory node
Type of serialized directory node blobs
Total size of this directory
Flags for this directory
All the files within this directory
Map of name to file entry
All the subdirectories within this directory
Map of name to directory entry
Constructor
Clear the contents of this directory
Check whether an entry with the given name exists in this directory
Name of the entry to search for
True if the entry exists
Adds a new file entry to this directory
The entry to add
Adds a new file with the given name
Name of the new file
Flags for the new file
Length of the file
Chunked data for the file
The new file entry
Attempts to get a file entry with the given name
Name of the file
Entry for the given name
Attempts to get a file entry with the given name
Name of the file
Entry for the file
True if the file was found
Opens a file for reading
Name of the file to open
Stream for the file
Attempts to open a file for reading
Name of the file
File stream, or null if the file does not exist
Deletes the file entry with the given name
Name of the entry to delete
True if the entry was found, false otherwise
Attempts to get a file entry from a path
Path to the directory
Cancellation token
The directory with the given path, or null if it was not found
Attempts to get a directory entry from a path
Path to the directory
Cancellation token
The directory with the given path, or null if it was not found
Deletes a file with the given path
Adds a new directory with the given name
Name of the new directory
Get a directory entry with the given name
Name of the directory
The entry with the given name
Attempts to get a directory entry with the given name
Name of the directory
Entry for the directory
True if the directory was found
Tries to get a directory with the given name
Name of the new directory
Cancellation token for the operation
The new directory object
Tries to get a directory with the given name
Name of the new directory
Cancellation token for the operation
The new directory object
Deletes the file entry with the given name
Name of the entry to delete
True if the entry was found, false otherwise
Constructor
Constructor
Reference to a directory node, including the target hash and length
Sum total of all the file lengths in this directory tree
Handle to the target node
Reference to a directory node, including the target hash and length
Sum total of all the file lengths in this directory tree
Handle to the target node
Sum total of all the file lengths in this directory tree
Handle to the target node
Entry for a directory within a directory node
Entry for a directory within a directory node
Stats reported for copy operations
Number of files that have been copied
Total size of data to be copied
Processing speed, in bytes per second
Total size of the data downloaded
Download speed, in bytes per second
Progress logger for writing copy stats
Whether to print out separate stats for download speed
Constructor
Constructor
Options for extracting data
Number of async tasks to spawn for reading
Default number of read tasks to use
Number of async tasks to spawn for decoding data
Default number of decode tasks to use
Number of async tasks to spawn for writing output
Default number of write tasks to use
Output for progress updates
Frequency that the progress object is updated
Whether to hash downloaded data to ensure it's correct
Output verbose logging about operations being performed
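The separate read/decode/write task counts above suggest a bounded producer/consumer pipeline. This is a minimal threaded sketch of that shape; the stage functions, queue sizes, and counts are illustrative assumptions, not Horde's actual extraction code.

```python
import queue
import threading

def run_pipeline(items, decode, write, num_decode=2, num_write=2):
    """Run items through decode and write stages on worker threads."""
    decode_q: queue.Queue = queue.Queue(maxsize=16)
    write_q: queue.Queue = queue.Queue(maxsize=16)
    results = []
    lock = threading.Lock()

    def decoder():
        while True:
            item = decode_q.get()
            if item is None:  # sentinel: no more work
                break
            write_q.put(decode(item))

    def writer():
        while True:
            item = write_q.get()
            if item is None:
                break
            out = write(item)
            with lock:
                results.append(out)

    decoders = [threading.Thread(target=decoder) for _ in range(num_decode)]
    writers = [threading.Thread(target=writer) for _ in range(num_write)]
    for t in decoders + writers:
        t.start()
    for item in items:              # the "read" stage
        decode_q.put(item)
    for _ in decoders:              # one sentinel per decoder
        decode_q.put(None)
    for t in decoders:
        t.join()
    for _ in writers:
        write_q.put(None)
    for t in writers:
        t.join()
    return results
```

Bounded queues provide backpressure, so a slow write stage throttles reading instead of buffering the whole input.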
Extension methods for extracting data from directory nodes
Utility function to allow extracting a packed directory to disk
Directory to update
Utility function to allow extracting a packed directory to disk
Directory to extract
Directory to write to
Sink for progress updates
Logger for output
Cancellation token for the operation
Utility function to allow extracting a packed directory to disk
Directory to extract
Directory to write to
Sink for progress updates
Frequency for progress updates
Logger for output
Cancellation token for the operation
Utility function to allow extracting a packed directory to disk
Directory to extract
Directory to write to
Logger for output
Cancellation token for the operation
Utility function to allow extracting a packed directory to disk
Directory to extract
Directory to write to
Options for the download
Logger for output
Cancellation token for the operation
Utility function to allow extracting a packed directory to disk
Directory to extract
Directory to write to
Options for the download
Logger for output
Cancellation token for the operation
Stats reported for copy operations
Number of files that have been copied
Total size of data to be copied
Processing speed, in bytes per second
Reports progress info back to callers
Progress logger for writing copy stats
Constructor
Constructor
Describes an update to a file in a directory tree
Path to the file
Length of the file data
Last modified time for the file
Flags for the new file entry
Hash of the entire stream
Chunked data for the file
Describes an update to a file in a directory tree
Path to the file
Length of the file data
Last modified time for the file
Flags for the new file entry
Hash of the entire stream
Chunked data for the file
Path to the file
Flags for the new file entry
Length of the file data
Hash of the entire stream
Chunked data for the file
Last modified time for the file
Constructor
Constructor
Writes interior node data to the given writer
Describes an update to a directory node
Directories to be updated
Files to be updated
Reset this instance
Adds a file by path to this object
Path to add to
Content for the file
Adds a file to the tree
Path to the file
Flags for the new file entry
Length of the file
Last modified time for the file
Chunked data instance
Adds a filtered list of files from disk
Files to add
Writes interior node data to the given writer
Extension methods for directory node updates
Writes a tree of files to a storage writer
Writes a tree of files to a storage writer
Writes a tree of files to a storage writer
Updates this tree of directory objects
Directory to update
Files to add
Writer for new node data
Cancellation token for the operation
Updates this tree of directory objects
Directory to update
Files to add
Writer for new node data
Cancellation token for the operation
Adds files from a directory to the storage
Directory to add to
Base directory to base paths relative to
Files to add
Options for chunking file content
Writer for new node data
Feedback interface for progress updates
Cancellation token for the operation
Copies entries from a zip file
Directory to update
Input stream
Writer for new nodes
Cancellation token for the operation
Stream which zips a directory node tree dynamically
Constructor
Root node to copy from
Filter for files to include in the zip
Optional logger for debug tracing
Extension methods for
Returns a stream containing the zipped contents of this directory
The directory to zip
Filter for files to include in the zip
Logger for diagnostic output
Stream containing zipped archive data
Flags for a file entry
No other flags set
Indicates that the referenced file is executable
File should be stored as read-only
File contents are utf-8 encoded text. Client may want to replace line-endings with OS-specific format.
Used to indicate that custom data is included in the output. Used internally for serialization; not exposed to users.
File should be materialized as UTF-16 (but is stored as a UTF-8 source)
Whether the file entry includes a modification time
Entry for a file within a directory node
Name of this file
Flags for this file
Length of this entry
Hash of the file as a contiguous stream. This differs from individual node hashes which hash the Merkle tree of chunks forming it.
Reference to the chunked data for the file
Last modified time. Only valid if FileEntryFlags.HasModTime is set.
Custom user data for this file entry
Constructor
Constructor
Creates a stream that returns the contents of this file
The content stream
Copies the contents of this node and its children to the given output stream
The output stream to receive the data
Cancellation token for the operation
Extracts the contents of this node to a file
File to write with the contents of this node
Cancellation token for the operation
Get the permission flags from a file on disk
File to check
Permission flags for the given file
Applies the correct permissions to a file for a particular set of file entry flags
File to modify
Flags for the file
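The distinction noted above between the file's hash as a contiguous stream and the node hashes over the Merkle tree of chunks can be illustrated as follows. SHA-1 stands in for whatever hash the store actually uses, and the one-level "tree" is a simplification.

```python
import hashlib

def stream_hash(chunks):
    """Hash the file as one contiguous stream (what the file entry stores)."""
    h = hashlib.sha1()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def chunk_tree_hash(chunks):
    """Hash over the individual chunk hashes (a one-level Merkle tree)."""
    h = hashlib.sha1()
    for chunk in chunks:
        h.update(hashlib.sha1(chunk).digest())
    return h.hexdigest()
```

The stream hash is independent of how the data happened to be chunked, which is why it is stored separately from the per-node hashes.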
Stream which returns the content of a file
Constructor
The file entry to copy from
Shared definitions for
Static accessor for the blob type guid
A node that redirects to another node
The target handle
Constructor
Target node for the redirect
Path to an object within an . Object keys are normalized to lowercase, and may consist of the characters [a-z0-9_./].
Dummy enum to allow invoking the constructor which takes a sanitized full path
Dummy value
Accessor for the internal path string
Constructor
Path to the blob. The meaning of this string is implementation defined.
Constructor
Path to the blob. The meaning of this string is implementation defined.
Constructor
Path to the blob. The meaning of this string is implementation defined.
Makes an object key from the given path
Whether the blob locator is valid
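The object key normalization described above (lowercased, restricted to `[a-z0-9_./]`) can be sketched as a simple validating constructor. The function name and the choice to raise on invalid input are assumptions.

```python
import re

_VALID_KEY = re.compile(r'[a-z0-9_./]+')

def make_object_key(path: str) -> str:
    """Normalize a path into an object key: lowercase, limited charset."""
    key = path.lower()
    if not _VALID_KEY.fullmatch(key):
        raise ValueError(f"invalid object key: {key!r}")
    return key
```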
Storage backend that utilizes the local filesystem
Accessor for the base directory
Constructor
Base directory for the store
Cache for memory mapped files
Gets the path for storing a file on disk
Maps a file into memory for reading, and returns a handle to it
Path to the file
Offset of the data to retrieve
Length of the data
Handle to the data. Must be disposed by the caller.
Delete a file from the store
Storage backend that utilizes the local filesystem
Constructor
Create a new store instance with the given base directory
Implements an object store utilizing standard HTTP operations
Constructor
Base url for the store
Method for creating a new http client
In-memory implementation of a storage backend
Read only access to the stored blobs
Storage backend wrapper which adds a prefix to the start of each item
Constructor
Identifier for a ref in the storage system. Refs serve as GC roots, and are persistent entry points to expanding data structures within the store.
Empty ref name
String for the ref name
Constructor
Constructor
Validates a given string as a blob id
String to validate
Name of the argument
Construct a ref from a string
Name of the ref
Type converter for IoHash to and from JSON
Type converter from strings to IoHash objects
Type converter to compact binary
Implementation of a local disk cache which can be shared by multiple backends
Constructor
Constructor
Constructor
Directory to store cache files. Will be cleared on startup. Defaults to a randomly generated directory in the user's temp folder.
Maximum size of the cache. Defaults to 50MB.
Logger for error/warning messages
Wraps an object store in another store that routes requests through the cache
Prefix for items in this cache
Backend to wrap
Wraps a storage backend in another backend that routes requests through the cache
Prefix for items in this cache
Backend to wrap
Wraps a storage backend in another backend that routes requests through the cache
Prefix for items in this cache
Backend to wrap
The cache instance. May be null.
Get stats for the cache operation
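Wrapping a backend in a cache, as described above, amounts to a read-through layer with bounded eviction. This in-memory LRU sketch illustrates the shape (the real cache is on disk and shared between backends); the class name, interface, and prefix handling are hypothetical.

```python
from collections import OrderedDict

class CachingStore:
    """Read-through wrapper routing reads via a bounded LRU cache."""

    def __init__(self, backend_read, max_items=128, prefix=""):
        self._read = backend_read      # read function of the wrapped backend
        self._cache = OrderedDict()
        self._max = max_items
        self._prefix = prefix          # keeps multiple backends from colliding

    def read(self, key):
        cache_key = self._prefix + key
        if cache_key in self._cache:
            self._cache.move_to_end(cache_key)  # mark as recently used
            return self._cache[cache_key]
        value = self._read(key)  # miss: fall through to the wrapped backend
        self._cache[cache_key] = value
        if len(self._cache) > self._max:
            self._cache.popitem(last=False)  # evict least recently used
        return value
```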
Base exception for the storage service
Constructor
Constructor
Exception thrown when an object does not exist
Path to the object
Constructor
Constructor
Exception for a ref not existing
Name of the missing ref
Constructor
Response from uploading a bundle
Path to the uploaded blob
URL to upload the blob to.
Flag for whether the client could use a redirect instead (i.e. not post content to the server, and get an upload URL back).
Response object for finding an alias
Locator for the target blob
Rank of this alias
Inline data associated with this alias
Default constructor
Constructor
Response object for searching for nodes with a given alias
Hash of the target node
Request to batch update metadata in the database
List of aliases to add
List of aliases to remove
List of refs to add
List of refs to remove
Request object for adding an alias
Name of the alias
Rank for the new alias
Data to store with the ref
Path to the target blob
Request object for removing an alias
Name of the alias
Path to the target blob
Request object for writing a ref
Hash of the target blob
Path to the target blob
Locator for the target blob
Export index for the ref
Inline data associated with the ref
Options for the ref
Request object for removing a ref
Name of the ref
Request object for removing a ref
Name of the ref
Response object for reading a ref
Hash of the target node
The target blob
Link to information about the target node
Base path for this storage backend
Provides functionality to extract and patch data in a local workspace
Flag for the default layer
Flag for the cache layer
Flags for user layers
Root directory for the workspace
Layers currently in this workspace
Constructor
Directory for the workspace
Path to the state file for this directory
Logger for diagnostic output
Create a new workspace instance in the given location. Opens the existing instance if it already contains workspace data.
Root directory for the workspace
Logger for output
Cancellation token for the operation
Workspace instance
Create a new workspace instance in the given location. Opens the existing instance if it already contains workspace data.
Root directory for the workspace
Logger for output
Cancellation token for the operation
Workspace instance
Attempts to open an existing workspace for the current directory.
Root directory for the workspace
Logger for output
Cancellation token for the operation
Workspace instance
Attempts to open an existing workspace for the current directory.
Root directory for the workspace
Logger for output
Cancellation token for the operation
Workspace instance
Save the current state of the workspace
Cancellation token for the operation
Add or update a layer with the given identifier
Identifier for the layer
Removes a layer with the given identifier. Does not remove any files in the workspace.
Layer to update
Syncs a layer to the given contents
Identifier for the layer
Base path within the workspace to sync to.
New contents for the layer
Cancellation token for the operation
Updates the status of files in this workspace based on current filesystem metadata
Cancellation token for the operation
Checks that all files within the workspace have the correct hash
Cancellation token for the operation
Identifier for a workspace layer
Identifier for the layer
Identifier for a workspace layer
Identifier for the layer
Identifier for the layer
Name of the default layer
Converter class to and from ObjectId values
Unique id for an artifact
Identifier for the artifact
Unique id for an artifact
Identifier for the artifact
Identifier for the artifact
Converter class to and from ObjectId values
Creates a new artifact
Name of the artifact
Additional search keys tagged on the artifact
Description for the artifact
Stream to create the artifact for
Keys used to identify the artifact
Metadata for the artifact
Creates a new artifact
Name of the artifact
Additional search keys tagged on the artifact
Description for the artifact
Stream to create the artifact for
Keys used to identify the artifact
Metadata for the artifact
Name of the artifact
Additional search keys tagged on the artifact
Description for the artifact
Stream to create the artifact for
Keys used to identify the artifact
Metadata for the artifact
Legacy Perforce changelist number.
Commit for the new artifact
Information about a created artifact
Identifier for the new artifact
Resolved commit id for the artifact
Namespace that should be written to with artifact data
Ref to write to
Ref for the artifact at the changelist prior to this one. Can be used to deduplicate against.
Token which can be used to upload blobs for the artifact, and read blobs from the previous artifact
Information about a created artifact
Identifier for the new artifact
Resolved commit id for the artifact
Namespace that should be written to with artifact data
Ref to write to
Ref for the artifact at the changelist prior to this one. Can be used to deduplicate against.
Token which can be used to upload blobs for the artifact, and read blobs from the previous artifact
Identifier for the new artifact
Resolved commit id for the artifact
Namespace that should be written to with artifact data
Ref to write to
Ref for the artifact at the changelist prior to this one. Can be used to deduplicate against.
Token which can be used to upload blobs for the artifact, and read blobs from the previous artifact
Type of data to download for an artifact
Download as a zip file
Download as a UGS link
Describes an artifact
Change number
Default constructor
Constructor
Result of an artifact search
List of artifacts matching the search criteria
Describes a file within an artifact
Name of this file
Length of this entry
Hash of the target node
Constructor
Describes a file within an artifact
Name of this file
Length of this entry
Hash of the target node
Constructor
Describes a directory within an artifact
Names of sub-directories
Files within the directory
Request to create a zip file with artifact data
Filter lines for the zip. Uses standard syntax.
Name of an artifact
The artifact type
Name of an artifact
The artifact type
The artifact type
Constructor
Identifier for the artifact type
Converter to and from instances.
Type of an artifact
The artifact type
Type of an artifact
The artifact type
The artifact type
Default artifact type
Output from a build step
Captured state from the machine executing a build step
Traces from the machine executing a build step
Test data generated by a machine executing a build step
Constructor
Identifier for the artifact type
Converter to and from instances.
Exception thrown to indicate that an artifact type does not exist
The stream containing the artifact
The missing artifact type
Constructor
Information about an artifact
Identifier for the Artifact. Randomly generated.
Name of the artifact
Type of artifact
Description for the artifact
Identifier for the stream that produced the artifact
Change that the artifact corresponds to
Keys used to collate artifacts
Metadata for the artifact
Storage namespace containing the data
Name of the ref containing the root data object
Time at which the artifact was created
Handle to the artifact data
Deletes this artifact
Cancellation token for the operation
Used to write data into an artifact
Creates a writer for new artifact blobs
Adds an alias to a given blob
Alias for the blob
Locator for the blob
Rank for this alias. In situations where an alias has multiple mappings, the alias with the highest rank will be returned by default.
Additional data to be stored inline with the alias
Cancellation token for the operation
Finish writing the artifact data
Root blob for the artifact
Cancellation token for the operation
The complete artifact
Interface for a collection of artifacts
Creates a new artifact
Name of the artifact
Type identifier for the artifact
Description for the artifact
Stream that the artifact was built from
Commit that the artifact was built from
Keys for the artifact
Metadata for the artifact
Cancellation token for the operation
The new log file document
Finds artifacts with the given keys.
Identifiers to return
Stream to find artifacts for
Minimum commit for the artifacts (inclusive)
Maximum commit for the artifacts (inclusive)
Name of the artifact to search for
The artifact type
Set of keys, all of which must all be present on any returned artifacts
Maximum number of results to return
Cancellation token for the operation
Sequence of artifacts. Ordered by descending CL order, then by descending order in which they were created.
Gets an artifact by ID
Unique id of the artifact
Cancellation token for the operation
The artifact document
Extension methods for artifacts
Finds artifacts with the given keys.
Collection to operate on
Stream to find artifacts for
Minimum commit for the artifacts (inclusive)
Maximum commit for the artifacts (inclusive)
Name of the artifact to search for
The artifact type
Set of keys, all of which must all be present on any returned artifacts
Maximum number of results to return
Cancellation token for the operation
Sequence of artifacts. Ordered by descending CL order, then by descending order in which they were created.
Request to return a set of blobs for Unsync
The strong hash algorithm
Files to retrieve
Files to retrieve
Requests a set of blobs from a particular file
Path to the file
Blobs to return
Requests a block of data
Hash of the block
Base class for configuring HTTP service clients
Base address for http requests
Exception thrown due to failed authorization
Constructor
Options for authenticating particular requests
Url of the auth server
Type of grant
Client id
Client secret
Scope of the token
Http message handler which adds an OAuth authorization header using a cached/periodically refreshed bearer token
Constructor
Updates the current access token
Factory for creating OAuth2AuthProvider instances from a set of options
Constructor
Create an instance of the auth provider
Options for authenticating particular requests
Bearer token for auth
Http message handler which adds an OAuth authorization header using a cached/periodically refreshed bearer token
Constructor
Normalized string identifier for a resource
Number of bytes in the identifier
Number of characters when formatted as a string
Constructor
Bytes to parse
Parse a binary id from a string
Parse a binary id from a string
Parse a binary id from a string
Attempt to parse a binary id from a string
Attempt to parse a binary id from a string
Attempt to parse a binary id from a string
Attempt to parse a binary id from a string
Checks whether this BinaryId is set
Format this id as a sequence of UTF8 characters
Format this id as a sequence of UTF8 characters
Format this id as a sequence of UTF8 characters
Format this id as a sequence of UTF8 characters
Compares two binary ids for equality
The first string id
Second string id
True if the two string ids are equal
Compares two binary ids for inequality
The first string id
Second string id
True if the two string ids are not equal
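The parse/format round-trip described above for a fixed-width binary id can be sketched as lowercase-hex conversion. The 12-byte width is an assumption for illustration; the real BinaryId width may differ.

```python
NUM_BYTES = 12  # assumed width, two hex characters per byte

def parse_binary_id(text: str) -> bytes:
    """Parse a lowercase-hex identifier of the expected width."""
    if len(text) != NUM_BYTES * 2:
        raise ValueError(f"expected {NUM_BYTES * 2} characters")
    return bytes.fromhex(text)

def format_binary_id(value: bytes) -> str:
    """Format the identifier back to its canonical hex string."""
    if len(value) != NUM_BYTES:
        raise ValueError(f"expected {NUM_BYTES} bytes")
    return value.hex()
```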
Class which serializes types
Class which serializes types
Base class for converting to and from types containing a . Useful pattern for reducing boilerplate with strongly typed records.
Converts a type to a
Constructs a type from a
Attribute declaring a for a particular type
The converter type
Constructor
Converter to compact binary objects
Class which serializes types with a to Json
Class which serializes types with a to Json
Creates constructors for types with a to Json
Identifier for a commit from an arbitrary version control system.
Commit id with an empty name
Name of this commit. Compared as a case-insensitive string.
Constructor
Creates a commit id from a Perforce changelist number. Temporary helper method for migration purposes.
Creates a commit id from a Perforce changelist number. Temporary helper method for migration purposes.
Gets the Perforce changelist number. Temporary helper method for migration purposes.
Gets the Perforce changelist number. Temporary helper method for migration purposes.
Gets the Perforce changelist number. Temporary helper method for migration purposes.
Test two commits for equality
Test two commits for inequality
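The Perforce migration helpers and case-insensitive name comparison described above can be sketched as a minimal commit-id type. The class shape and method names are assumptions mirroring the documented behavior, not the actual C# API.

```python
class CommitId:
    """Minimal sketch of a VCS-agnostic commit identifier."""

    def __init__(self, name: str):
        self.name = name

    @classmethod
    def from_perforce_change(cls, change: int) -> "CommitId":
        # A Perforce changelist number maps to its decimal string form.
        return cls(str(change))

    def get_perforce_change(self) -> int:
        return int(self.name)

    def __eq__(self, other):
        # Names are compared as case-insensitive strings.
        return isinstance(other, CommitId) and self.name.lower() == other.name.lower()

    def __hash__(self):
        return hash(self.name.lower())
```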
Variant of including a value that allows it to be used for ordering.
Commit id with an empty name
Value used for ordering commits.
Constructor
Creates a commit id from a Perforce changelist number. Temporary helper method for migration purposes.
Creates a commit id from a Perforce changelist number. Temporary helper method for migration purposes.
Information about a commit
The commit id
The changelist number
Name of the user that authored this change [DEPRECATED]
Information about the user that authored this change
The description text
Tags for this commit
List of files that were modified, relative to the stream base
Constructor
Constants for known commit tags
Predefined filter name for commits containing code
Predefined filter name for commits containing content
The tag text
Constructor
Check if the current tag is empty
Compares two string ids for equality
The first string id
Second string id
True if the two string ids are equal
Compares two string ids for inequality
The first string id
Second string id
True if the two string ids are not equal
Converts values to and from JSON
Stores metadata about a commit
Id for this commit
Stream containing the commit
The change that this commit originates from
The author user id
The owner of this change, if different from the author (due to Robomerge)
Changelist description
Base path for all files in the change
Date/time that change was committed
Gets the list of tags for the commit
Cancellation token for the operation
True if the commit has the given tag
Determine if this commit matches the given filter. Prefer using commit tags rather than this method; the results can be cached.
Filter to test
Cancellation token for the operation
Gets the files for this change, relative to the root of the stream
Minimum number of files to return. The response will include at least this number of files, unless the commit has fewer files.
Maximum number of files to return. Querying a large number of files may cause performance issues with merge commits.
Cancellation token for the operation
List of files modified by this commit
Extension methods for operating on commits
Gets the files for this change, relative to the root of the stream
Commit to operate on
Number of files to return
Cancellation token for the operation
List of files modified by this commit
VCS abstraction. Provides information about commits to a particular stream.
Creates a new change
Path to modify in the change
Description of the change
Cancellation token for the operation
New commit information
Gets a commit by id
Commit to query
Cancellation token for the operation
Commit details
Gets an ordered commit id
The commit to query
Cancellation token for the operation
Numbered commit id
Finds changes submitted to a stream, in reverse order.
The minimum changelist number
Whether to include the minimum changelist in the range of enumerated responses
The maximum changelist number
Whether to include the maximum changelist in the range of enumerated responses
Maximum number of results to return
Tags for the commits to return
Cancellation token for the operation
Changelist information
Subscribes to changes from this commit source
Minimum changelist number (exclusive)
Tags for the commit to return
Cancellation token for the operation
New change information
Extension methods for
Creates a new change for a template
The Perforce service instance
The template being built
Cancellation token for the operation
New changelist number
Finds changes submitted to a stream, in reverse order.
Collection to operate on
The minimum changelist number
The maximum changelist number
Maximum number of results to return
Tags for the commits to return
Cancellation token for the operation
Changelist information
Gets the last code change equal to or before the given change number
The commit source to query
Maximum code change to query
Cancellation token for the operation
The last code change
Finds the latest commit from a source
The commit source to query
Cancellation token for the operation
The latest commit
Finds the latest commit from a source
The commit source to query
Cancellation token for the operation
The latest commit
Exception thrown when a condition is not valid
Constructor
A conditional expression that can be evaluated against a particular object
The condition text
Error produced when parsing the condition
Determines if the condition is empty
True if the condition is empty
Checks if the condition has been parsed correctly
Parse the given text as a condition
Condition text to parse
The new condition object
Attempts to parse the given text as a condition
Condition to parse
The parsed condition. Does not validate whether the parse completed successfully; call to verify.
Evaluate the condition using the given callback to retrieve property values
Implicit conversion from string to conditions
Converter from conditions to compact binary objects
Type converter from strings to condition objects
Type converter from Json strings to condition objects
Represents a remotely executed process managed by the Horde agent
Constructor
Type of a compute message
No message was received (end of stream)
No-op message sent to keep the connection alive. Remote should reply with the same message.
Sent in place of a regular response if an error occurs on the remote
Fork the message loop into a new channel
Sent as the first message on a channel to notify the remote that the remote end is attached
Extract files on the remote machine (Initiator -> Remote)
Notification that files have been extracted (Remote -> Initiator)
Deletes files on the remote machine (Initiator -> Remote)
Execute a process in a sandbox (Initiator -> Remote)
Execute a process in a sandbox (Initiator -> Remote)
Execute a process in a sandbox (Initiator -> Remote)
Returns output from the child process to the caller (Remote -> Initiator)
Returns the process exit code (Remote -> Initiator)
Reads a blob from storage
Response to a request.
Xor a block of data with a value
Result from an request.
Flags describing how to execute a compute task process on the agent
No execute flags set
Request execution to be wrapped under Wine when running on Linux.
Agent still reserves the right to refuse it (e.g. no Wine executable configured, mismatched OS, etc.)
Use compute process executable as entrypoint for container
If not set, the path to the executable is passed as the first parameter to the container invocation
Standard implementation of a message
Type of the message
Data that was read
Constructor
Exception thrown when an invalid message is received
Constructor
Exception thrown when a compute execution is cancelled
Constructor
Try constructing and throwing if the exception message matches a cancellation exception
Deserialized exception message
If message matches
Writer for compute messages
Sends the current message
Message for reporting an error
Message for reporting an error
Message requesting that the message loop be forked
New channel to communicate on
Size of the buffer
Message requesting that the message loop be forked
New channel to communicate on
Size of the buffer
New channel to communicate on
Size of the buffer
Extract files from a bundle to a path in the remote sandbox
Path to extract the files to
Locator for the tree to extract
Extract files from a bundle to a path in the remote sandbox
Path to extract the files to
Locator for the tree to extract
Path to extract the files to
Locator for the tree to extract
Deletes files or directories in the remote
Filter for files to delete
Deletes files or directories in the remote
Filter for files to delete
Filter for files to delete
Message to execute a new child process
Executable path
Arguments for the executable
Working directory to execute in
Environment variables for the child process. Null values unset variables.
Additional execution flags
URL to container image. If specified, process will be executed inside this container
Message to execute a new child process
Executable path
Arguments for the executable
Working directory to execute in
Environment variables for the child process. Null values unset variables.
Additional execution flags
URL to container image. If specified, process will be executed inside this container
Executable path
Arguments for the executable
Working directory to execute in
Environment variables for the child process. Null values unset variables.
Additional execution flags
URL to container image. If specified, process will be executed inside this container
Response from executing a child process
Exit code for the process
Response from executing a child process
Exit code for the process
Exit code for the process
Creates a blob read request
Creates a blob read request
Message for running an XOR command
Data to xor
Value to XOR with
Message for running an XOR command
Data to xor
Value to XOR with
Data to xor
Value to XOR with
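The XOR command described above is a simple loopback-style test operation: each byte of the payload is xor'ed with a single value. A minimal sketch of that transform (the helper name is illustrative, not the actual Horde API):

```csharp
using System;

public static class XorHelper
{
    // Xor each byte of the payload with a single value, in place.
    // Applying the same value twice restores the original data,
    // which makes this convenient for round-trip channel tests.
    public static void XorInPlace(Span<byte> data, byte value)
    {
        for (int i = 0; i < data.Length; i++)
        {
            data[i] ^= value;
        }
    }
}
```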
Wraps various requests across compute channels
Closes the remote message loop
Sends a ping message to the remote
Sends an exception response to the remote
Sends an exception response to the remote
Parses a message as an
Requests that the remote message loop be forked
Parses a fork request message
Notifies the remote that a buffer has been attached
Waits until an attached notification is received along the channel
Throw an exception if message is not of expected type
Agent message to extend
Optional type to expect. If not specified, any message type is treated as unexpected.
Creates a sandbox on the remote machine
Parses a message as a
Destroys a sandbox on the remote machine
Current channel
Paths of files or directories to delete
Cancellation token for the operation
Parses a message as a
Executes a remote process (using ExecuteV1)
Current channel
Executable to run, relative to the sandbox root
Arguments for the child process
Working directory for the process
Environment variables for the child process
Cancellation token for the operation
Executes a remote process (using ExecuteV2)
Current channel
Executable to run, relative to the sandbox root
Arguments for the child process
Working directory for the process
Environment variables for the child process
Additional execution flags
Cancellation token for the operation
Executes a remote process (using ExecuteV3)
Current channel
Executable to run, relative to the sandbox root
Arguments for the child process
Working directory for the process
Environment variables for the child process
Additional execution flags
Optional container image URL. If set, execution will happen inside this container
Cancellation token for the operation
Parses a message as a
Parses a message as a
Parses a message as a
Sends output from a child process
Sends a response from executing a child process
Exit code from the process
Cancellation token for the operation
Parses a message as a
Wraps a compute message containing blob data
Reads a blob from the remote
Channel to write to
Path for the blob
Offset within the blob
Length of data to return
Cancellation token for the operation
Stream containing the blob data
Writes blob data to a compute channel
Channel to write to
The read request
Storage backend to retrieve the blob from
Cancellation token for the operation
Writes blob data to a compute channel
Channel to write to
Locator for the blob to send
Starting offset of the data
Length of the data
Storage backend to retrieve the blob from
Cancellation token for the operation
Send a message to request that a byte string be xor'ed with a particular value
Parse a message as an XOR request
Implementation of a compute channel
Allows creating new messages in rented memory
The negotiated compute protocol version number
Constructor
Protocol version number
Logger for diagnostic output
Constructor
Logger for diagnostic output
Overridable dispose method
Mark the send buffer as complete
Extension methods to allow creating channels from leases
Creates a message channel with the given identifier
Socket to create a channel for
Identifier for the channel
Creates a message channel with the given identifier
Socket to create a channel for
Identifier for the channel
Size of the send and receive buffer
Creates a message channel with the given identifier
Socket to create a channel for
Identifier for the channel
Size of the send buffer
Size of the receive buffer
Reads a message from the channel
Channel to receive on
Expected type of the message
Cancellation token for the operation
Data for a message that was read. Must be disposed.
Creates a new builder for a message
Channel to send on
Type of the message
Cancellation token for the operation
New builder for messages
Forwards an existing message across a channel
Channel to send on
The message to be sent
Cancellation token for the operation
Implements the remote end of a compute worker.
Constructor
Directory to use for reading/writing files
Environment variables to set for any child processes
Whether to execute any external assemblies in the current process
Path to Wine executable. If null, execution under Wine is disabled
Path to container engine executable, e.g. /usr/bin/podman. If null, execution inside a container is disabled
Logger for diagnostics
Runs the worker using commands sent along the given socket
Socket to read from
Cancellation token for the operation
Flattens and merges available env vars to be used for compute process execution
Optional extra env vars
Merged environment variables
Get user identity (Linux only)
Real user ID of the calling process
Get group identity (Linux only)
Real group ID of the calling process
Storage backend which can read bundles over a compute channel
Constructor
In-process buffer used to store compute messages
Constructor
Total capacity of the buffer
Constructor
Number of chunks in the buffer
Length of each chunk
Number of readers for this buffer
Core implementation of
Core implementation of
Constructor
Create a new shared memory buffer
Name of the buffer
Capacity of the buffer
Create a new shared memory buffer
Name of the buffer
Number of chunks in the buffer
Length of each chunk
Open an existing buffer by name
Name of the buffer to open
Core implementation of
Create a new shared memory buffer
Name of the buffer
Number of chunks in the buffer
Length of each chunk
Open an existing buffer by name
Name of the buffer to open
Runs a local Horde Agent process to process compute requests without communicating with a server
Constructor
Path to the Horde Agent assembly
Loopback port to connect on
Factory for logger instances
Implementation of which marshals data over a loopback connection to a method running on a background task in the same process.
Constructor
Port to connect on
Sandbox directory for the worker
Whether to run external assemblies in-process. Useful for debugging.
Logger for diagnostic output
Sets up the loopback listener and calls the server method
Handshake request message for tunneling server
Target host to relay traffic to/from
Target port
Handshake request message for tunneling server
Target host to relay traffic to/from
Target port
Target host to relay traffic to/from
Target port
Serialize the message
A string based representation
Deserialize the message
A raw string to deserialize
A request message
Handshake response message for tunneling server
Whether successful or not
Message with additional information describing the outcome
Handshake response message for tunneling server
Whether successful or not
Message with additional information describing the outcome
Whether successful or not
Message with additional information describing the outcome
Serialize the message
A string based representation
Deserialize the message
A raw string to deserialize
A request message
Exception for ServerComputeClient
Helper class to enlist remote resources to perform compute-intensive tasks.
Length of the nonce sent as part of handshaking between initiator and remote
Constructor
Factory for constructing http client instances
Logger for diagnostic messages
Constructor
Factory for constructing http client instances
Arbitrary ID used for identifying this compute client. If not provided, a random one will be generated
Logger for diagnostic messages
Exception indicating that no matching compute agents were found
The compute cluster requested
Requested agent requirements
Constructor
Identifier for a compute cluster
The text representing this id
Constructor
Unique id for the string
Compact binary converter for ClusterId
Type converter for ClusterId to and from JSON
Type converter from strings to ClusterId objects
In-process buffer used to store compute messages
Maximum number of chunks in a buffer
Maximum number of readers
Constructor
Resources shared between instances of the buffer
Overridable dispose method
Creates a new reader for this buffer
Writer for this buffer
Creates a new reference to the underlying buffer. The underlying resources will only be destroyed once all instances are disposed of.
Read interface for a compute buffer
Create a new reader instance using the same underlying buffer
Detaches this reader from the underlying buffer
Whether this buffer is complete (no more data will be added)
Updates the read position
Size of data that was read
Gets the next data to read
Memory to read from
Read from a buffer into another buffer
Memory to receive the read data
Cancellation token for the operation
Number of bytes read
Wait for data to be available, or for the buffer to be marked as complete
Minimum amount of data to read
Cancellation token for the operation
True if new data is available, false if the buffer is complete
Buffer that can receive data from a remote machine.
Create a new writer instance using the same underlying buffer
Gets memory to write to
Memory to be written to
Mark the output to this buffer as complete
Whether the writer was marked as complete. False if the writer has already been marked as complete.
Writes data into a buffer from a memory block
The data to write
Cancellation token for the operation
Gets memory to write to
Minimum size of the desired write buffer
Cancellation token for the operation
Memory to be written to
State shared between buffer instances
Write state for a chunk
Writer has moved to the next chunk
Chunk is still being appended to
This chunk marks the end of the stream
Stores the state of a chunk in a 64-bit value, which can be updated atomically
Stores the state of a chunk in a 64-bit value, which can be updated atomically
Wraps a pointer to the state of a chunk
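Packing a chunk's state flags and append position into one 64-bit value allows the writer and readers to coordinate with a single atomic compare-and-swap rather than a lock. A hypothetical encoding of that idea (the actual field layout in Horde may differ):

```csharp
using System.Threading;

public struct ChunkState
{
    // High bits: state flags (e.g. Appending / MovedToNext / EndOfStream).
    // Low 32 bits: number of bytes written to the chunk so far.
    long _value;

    const int LengthBits = 32;
    const long LengthMask = (1L << LengthBits) - 1;

    public long Length => _value & LengthMask;
    public long Flags => _value >> LengthBits;

    // Atomically reserve 'count' bytes, but only if the chunk is still
    // in the expected state; returns false if a concurrent update won.
    public bool TryAppend(long expectedFlags, int count)
    {
        long prev = Interlocked.Read(ref _value);
        if ((prev >> LengthBits) != expectedFlags)
        {
            return false;
        }
        long next = prev + count; // length occupies the low bits
        return Interlocked.CompareExchange(ref _value, next, prev) == prev;
    }
}
```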
State of a reader
State of a reader
Wraps a pointer to the state of a writer
State of the writer
State of the writer
Wraps a pointer to the state of a writer
Tracked state of the buffer
Constructor
Increment the reference count on this object
Decrement the reference count on this object, and dispose of it once it reaches zero
Overridable dispose method
Signals a read event
Signals read events for every reader
Waits for a read event to be signalled
Signals the write event
Waits for the write event to be signalled
Allocate a new reader
Conventional TCP-like interface for writing data to a socket. Sends are "push", receives are "pull".
Reader for the channel
Writer for the channel
Constructor
Sends data to a remote channel
Memory to write
Cancellation token for the operation
Marks a channel as complete
Buffer to receive the data
Cancellation token for the operation
Reads a complete message from the given socket, retrying reads until the buffer is full.
Buffer to store the data
Cancellation token for the operation
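"Retrying reads until the buffer is full" is the standard read-exact loop: a single receive may return fewer bytes than requested, so reads repeat until the buffer is filled or the stream ends. A generic sketch over a Stream (a hypothetical helper, not the Horde API; .NET 7+ offers Stream.ReadExactlyAsync with the same behavior):

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public static class ReadHelper
{
    // Fills 'buffer' completely, or throws if the stream ends mid-message.
    public static async Task ReadExactAsync(Stream stream, Memory<byte> buffer, CancellationToken token)
    {
        int offset = 0;
        while (offset < buffer.Length)
        {
            int read = await stream.ReadAsync(buffer.Slice(offset), token);
            if (read == 0)
            {
                throw new EndOfStreamException("Stream ended before the full message was received");
            }
            offset += read;
        }
    }
}
```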
Reads either a full message or end of stream from the channel
Buffer to store the data
Cancellation token for the operation
Mark the channel as complete (i.e. that no more data will be sent)
Generic class for compute errors
Exception thrown for internal reasons
Constructor
Exception thrown on a remote machine
Constructor
Constructor
Describes port used by a compute task
Built-in port used for agent and compute task communication
Externally visible port that is mapped to agent port
In direct connection mode, these two are identical.
Port the local process on the agent is listening on
Constructor
Request a machine to execute compute requests
Desired protocol version for the client
Condition to identify machines that can execute the request
Arbitrary ID to correlate the same request over multiple calls.
It's recommended to pick something globally unique, such as a UUID.
Details for making an agent connection
Request details for making an agent connection
Public IP of client requesting a compute resource (initiator)
As communication between client and Horde server may be on an internal network,
the client is responsible for resolving and providing this information.
TCP/IP ports the compute resource will listen to.
Key = arbitrary name identifying the port
Value = actual port number
Relay connection mode uses this information to set up port forwarding.
Type of connection mode that is preferred by the client. Server can still override.
Prefer connecting to agent over a public IP even if a more optimal route is available. Server can still override.
This is useful to avoid sending traffic over VPN tunnels.
Encryption mode to request. Server can still override.
Maximum duration (in milliseconds) that communication can be inactive between
the compute client and task-running agent before the connection is terminated.
Response to a cluster lookup request
Compute cluster ID
Response to compute allocation request
IP address of the remote agent machine running the compute task
How to establish a connection to the remote machine
An optional address (host:port) to use when connecting to agent via tunnel or relay mode
Port number on the remote machine
Assigned ports (externally visible port -> local port on agent)
Key is an arbitrary name identifying the port (same as was given in )
When relay mode is used, ports can be mapped to a different externally visible port.
If compute task uses and listens to port 7000, that port can be externally represented as something else.
For example, port 32743 can be pointed to port 7000.
This makes no difference for the compute task process, but the client/initiator making connections must
pay attention to this mapping.
Encryption used
Cryptographic nonce to identify the request, as a hex string
AES key for the channel, as a hex string
X.509 certificate used for SSL/TLS encryption
Which cluster this remote machine belongs to
Identifier for the remote machine
Agent version for the remote machine
Identifier for the new lease on the remote machine
Resources assigned to this machine
Version number for the compute protocol
Properties of the agent assigned to do the work
Describe how to connect to the remote machine
Connection is established directly to remote machine, behaving like a normal TCP/UDP connection
Connection is tunneled through Horde server.
When connecting, initiator must send a tunnel handshake request indicating which machine/IP to tunnel to.
Once handshake is complete, TCP connection behaves as normal (UDP not supported)
Connection is established to remote machine via a relay.
Forwarding is transparent and behaves like a normal TCP/UDP connection.
Describe encryption for the compute resource connection
No encryption enabled
Use custom AES-based encryption transport
Use SSL/TLS encryption with RSA 2048-bits
Use SSL/TLS encryption with ECDSA P-256
Resource needs declaration request
Unique session ID performing compute resource requests
Pool of agents to request resources from
Key/value of resources needed by session (such as CPU or memory, see KnownPropertyNames in Horde.Server)
Resource needs response
List of resource needs
Version number for the compute protocol
No version specified
Initial version number
Set new env vars UE_HORDE_CPU_COUNT and UE_HORDE_CPU_MULTIPLIER
Constant for the latest protocol version
Helper methods for compute protocol version numbers
Gets the appropriate bundle options for a compute protocol version number
Socket for sending and receiving data using a "push" model. The application can attach multiple writers to accept received data.
The current protocol number
Logger for diagnostic messages
Attaches a buffer to receive data.
Channel to receive data on
Writer for the buffer to store received data
Attaches a buffer to send data.
Channel to receive data on
Reader for the buffer to send data from
Creates a channel using a socket and receive buffer
Channel id to send and receive data
Creates a channel using a socket and receive buffer
Channel id to send and receive data
Buffer for receiving data
Buffer for sending data
Provides functionality for attaching buffers for compute workers
Name of the environment variable for passing the name of the compute channel
Creates a socket for a worker
Opens a socket which allows a worker to communicate with the Horde Agent
Opens a socket which allows a worker to communicate with the Horde Agent
Logger for diagnostic messages
Opens a socket which allows a worker to communicate with the Horde Agent
Name of the command buffer
Logger for diagnostic messages
Operates a server that a child process can open a to.
Name of the buffer to pass via
Constructor
Creates a new server for
Socket to connect to
Logger for errors
New server instance
Manages a set of readers and writers to buffers across a transport layer
Constructor
Transport to communicate with the remote
The protocol version number
Logger for trace output
Attempt to gracefully close the current connection and shutdown both ends of the transport
Sends a keep alive message to the remote machine. Does not wait for a response. Designed to keep a connection open when the remote is eagerly trying to close it.
Cancellation token for the operation
Low-level interface for transferring data
Writes data to the underlying transport
Buffer to be written
Cancellation token for the operation
Reads data from the underlying transport into an output buffer
Buffer to read into
Cancellation token for the operation
Indicate that all data has been read and written to the transport layer, and that there will be no more calls to send/recv
Cancellation token for the operation
Fill the given buffer with data
Buffer to read into
Cancellation token for the operation
Fill the given buffer with data
Buffer to read into
Cancellation token for the operation
Writes data to the underlying transport
Buffer to be written
Cancellation token for the operation
Exception for external IP resolver
Find public IP address of local machine by querying a third-party IP lookup service
Cache of last resolved IP address. Once resolved, the address is cached for the lifetime of this class.
Constructor
Get the external, public-facing IP of local machine
Cancellation token
External IP address
If unable to resolve
Interface for uploading compute work to remote machines
Find the most suitable cluster to execute a given compute assignment request
Requirements for the agent
Optional ID identifying the request over multiple calls, such as retrying the same request
Optional preference of connection details
Logger for output from this worker
Cancellation token for the operation
Adds a new remote request
Optional cluster ID. If not set, cluster will automatically be resolved by server
Requirements for the agent
Optional ID identifying the request over multiple calls, such as retrying the same request
Optional preference of connection details
Logger for output from this worker
Cancellation token for the operation
Declare resource needs for current client
Helps inform the server about current demand.
Can be called as often as necessary to keep needs up-to-date.
Cluster to execute the request
Which pool this applies to
Properties with a target amount of each, such as CPU or RAM
Cancellation token for the operation
Extension methods for
Exception from ComputeClient
Full-duplex channel for sending and receiving messages
Compute cluster ID
Properties of the remote machine
Resources assigned to this lease
Socket to communicate with the remote
IP address of the remote agent machine running the compute task
When using relay connection mode, this may be the IP of the relay rather than the remote machine itself.
How to establish a connection to the remote machine (when not using the default socket)
Assigned ports (externally visible port -> local port on agent)
Key is an arbitrary name identifying the port (same as was given when requesting the lease)
When relay mode is used, ports can be mapped to a different externally visible port.
If compute task uses and listens to port 7000, that port can be externally represented as something else.
For example, port 32743 can be pointed to port 7000.
This makes no difference for the compute task process, but the client/initiator making connections must
pay attention to this mapping.
Relinquish the lease gracefully
Cancellation token for the operation
Manages and hands out request IDs for compute allocation requests
Constructor
Starts a new batch of requests.
Any requests started during the current batch will be reset and marked as unfinished.
Get or create a request ID and mark it as part of current batch
A request ID
Mark a request Id as accepted. It won't be re-used for any future requests.
Request ID to mark as finished
Requirements for a compute task to be assigned an agent
Pool of machines to draw from
Condition string to be evaluated against the machine spec, e.g. cpu-cores >= 10 && ram.mb >= 200 && pool == 'worker'
Properties required from the remote machine
Resources used by the process
Whether we require exclusive access to the device
Default constructor
Construct a requirements object with a condition
Condition for matching machines to execute the work
Specifies requirements for resource allocation
Minimum allocation of the requested resource
Maximum allocation of the requested resource. Allocates as much as possible unless capped.
Transport layer that adds AES encryption on top of an underlying transport implementation.
Key must be exchanged separately (e.g. via the HTTPS request to negotiate a lease with the server).
Length of the required encryption key.
Length of the nonce. This should be a cryptographically random number, and does not have to be secret.
Constructor
The underlying transport implementation that will be encrypted
Encryption key. Should be generated with a cryptographically secure random number generator
Whether to dispose the inner transport when this instance is disposed
Default receive buffer size for data that needs to be buffered for the next read. Will automatically grow
Create a random key for this transport
A cryptographically random key
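Generating such a key with a cryptographically secure RNG is straightforward in .NET. A sketch assuming a 256-bit AES key (the key and nonce lengths actually required are defined by the transport class above):

```csharp
using System.Security.Cryptography;

public static class KeyHelper
{
    // Create a random 256-bit AES key using a cryptographically secure RNG.
    public static byte[] CreateKey() => RandomNumberGenerator.GetBytes(32);

    // The nonce does not need to be secret, only unpredictable and unique per key.
    public static byte[] CreateNonce(int length) => RandomNumberGenerator.GetBytes(length);
}
```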
Compute transport which wraps another transport acting as a watchdog timer for inactivity
If no send or receive activity has been seen within specified timeout, the cancellation source will be triggered
Timeout before triggering a cancellation
Constructor
Transport to watch
Timeout before cancelling
Time since last send or receive completed
Start a loop monitoring activity on the inner transport
Source to cancel when transport times out
Logger
Cancellation token
Implementation of for communicating over a .
(Note: this uses a .NET in-process pipe, not an IPC pipe).
Constructor
Reader for the pipe
Writer for the pipe
Compute transport which wraps an underlying stream
Constructor
Stream to use for transferring data
Whether to leave the inner stream open when disposing
Implementation of for communicating over a socket using SSL/TLS
Constructor
Socket to communicate over
Certificate used for auth on both server and client
Whether socket is acting as a server or client
Constructor
Socket to communicate over
Certificate used for auth on both server and client
Whether socket is acting as a server or client
Perform SSL authentication
Checks that the certificate returned by the server is indeed the correct one
True if it matches
Generate a self-signed certificate to be used for communicating between client and server of this transport
An X.509 certificate serialized as bytes
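A self-signed certificate for this kind of point-to-point TLS link can be produced with .NET's CertificateRequest API; a sketch (the subject name and validity period are illustrative, not what Horde uses):

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public static class CertHelper
{
    // Generate a self-signed ECDSA P-256 certificate and export it as bytes.
    public static byte[] CreateSelfSignedCert()
    {
        using ECDsa key = ECDsa.Create(ECCurve.NamedCurves.nistP256);
        var request = new CertificateRequest("CN=ComputeTransport", key, HashAlgorithmName.SHA256);
        using X509Certificate2 cert = request.CreateSelfSigned(
            DateTimeOffset.UtcNow.AddMinutes(-5), DateTimeOffset.UtcNow.AddDays(1));
        return cert.Export(X509ContentType.Pfx); // includes the private key
    }
}
```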
Implementation of for communicating over a socket
Constructor
Socket to communicate over
Setting information required by dashboard
The name of the external issue service
The url of the external issue service
The url of the Perforce Swarm installation
Url of Robomerge installation
Help email address that users can contact with issues
Help slack channel that users can use for issues
The auth method in use
Device problem cooldown in minutes
Categories to display on the agents page
Categories to display on the pools page
Configured artifact types
Describes a category for the pools page
Title for the tab
Condition for pools to be included in this category
Describes a category for the agents page
Title for the tab
Condition for agents to be included in this category
A summary of what the preview item changes
The CL the preview was deployed in
An example preview site where users can view the changes
Optional Link for discussion of the preview item
Optional Link for discussing the preview item
The preview item to update
A summary of what the preview item changes
The CL the preview was deployed in
Whether the preview is under consideration; if false, the preview item didn't pass muster
An example preview site where users can view the changes
Optional Link for discussion of the preview item
Optional Link for discussing the preview item
Dashboard preview item response
The unique ID of the preview item
When the preview item was created
A summary of what the preview item changes
The CL the preview was deployed in
Whether the preview is under consideration; if false, the preview item didn't pass muster
An example preview site where users can view the changes
Optional Link for discussion of the preview item
Optional Link for discussing the preview item
Dashboard challenge response
Whether first time setup needs to run
Whether the user needs to authorize
Identifier for a pool
Id to construct from
Identifier for a pool
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
The type of device pool
Available to CIS jobs
Shared by users with remote checking and checkouts
Create device platform request
The name of the platform
Create device platform response object
Id of newly created platform
Constructor
Update request object for a device platform
The vendor model ids for the platform
Get object response which describes a device platform
Unique id of device platform
Friendly name of device platform
Platform vendor models
Response constructor
Device pool creation request object
The name for the new pool
The name for the new pool
Projects associated with this device pool
Device pool update request object
Id of the device pool to update
Projects associated with this device pool
Device pool creation response object
Id of the newly created device pool
Constructor
Device pool response object
Id of the device pool
Name of the device pool
Type of the device pool
Whether there is write access to the pool
Constructor
Device creation request object
The platform of the device
The pool to assign the device
The friendly name of the device
Whether to create the device in enabled state
The network address of the device
The vendor model id of the device
Device creation response object
The id of the newly created device
Constructor
Get response object which describes a device (DEPRECATED)
The job id which utilized device
The job's step id
The time device was reserved
The time device was freed
Get response object which describes a device
The unique id of the device
The platform of the device
The pool the device belongs to
The friendly name of the device
Whether the device is currently enabled
The address of the device (if it allows network connections)
The vendor model id of the device
Any notes provided for the device
If the device has a marked problem
If the device is in maintenance mode
The user id that has the device checked out
The last time the device was checked out
When the checkout will expire
The last user to modify the device
Device Utilization data
Device response constructor
Device update request object
The device pool id
The device name
IP address or hostname of device
Device vendor model id
Markdown notes
Whether device is enabled
Whether the device is in maintenance mode
Whether to set or clear any device problem state
Device checkout request object
Whether to check the device out or in
Device reservation request object
Device reservation platform id
The optional vendor model ids to include for this device
The optional vendor model ids to exclude for this device
Reservation request object
What pool to reserve devices in
Devices to reserve
Device reservation response object
The reservation id of newly created reservation
The devices that were reserved
A reservation containing one or more devices
Randomly generated unique id for this reservation
Which device pool the reservation is in
The reserved devices
Job id holding reservation
Job step id holding reservation
Job name holding reservation
Job step name holding reservation
Reservations held by a user, requires a token
The hostname of machine holding reservation
The optional reservation details
The UTC time when the reservation was created
The legacy reservation system guid, to be removed once the Gauntlet client can be updated in all streams
Device telemetry response
The UTC time the telemetry data was created
The stream id which utilized device
The job id which utilized device
The job name which utilized device
The job's step id
The job's step name
If this telemetry has a reservation, the start time of the reservation
If this telemetry has a reservation, the finish time of the reservation
If this telemetry marks a detected device issue, the time of the issue
Device telemetry response
The device id for the telemetry data
Individual telemetry data points
Constructor
Stream device telemetry for pool snapshot
Device id for reservation
Job id associated with reservation
The step id of reservation
The name of the job holding reservation
The name of the step holding reservation
Constructor
Device telemetry response
The corresponding platform id
Available devices of this platform
Devices in maintenance state
Devices in problem state
Number of devices in disabled state
Reserved devices
Constructor
Device telemetry response
The UTC time the telemetry data was created
Individual pool telemetry data points
Constructor
Reservation request for legacy clients
The device types to reserve, these are mapped to platforms
The hostname of machine reserving devices
The duration of reservation
Reservation details string
The PoolId of reservation request
The JobId of reservation request
The StepId of reservation request
A specific device to reserve
Reservation response for legacy clients
The names of the devices that were reserved
The corresponding perf specs of the reserved devices
The corresponding perf specs of the reserved devices
The host name of the machine making the reservation
The start time of the reservation
The duration of the reservation (before renew)
The JobId of reservation request
The StepId of reservation request
The job name of reservation request
The step name of reservation request
The legacy guid of the reservation
The step name of reservation request
Device response for legacy clients
The id of the reserved device
The name of the reserved device
The (legacy) type of the device, mapped from platform id
The IP or hostname of device
The (legacy) perf spec of the device
The device model information
The available start time which is parsed client side
The available end time which is parsed client side
Whether device is enabled
Associated device data
Identifier for a device platform
Id to construct from
Identifier for a device platform
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Identifier for a device pool
Id to construct from
Identifier for a device pool
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Version number for the Horde public API. Can be retrieved from the /api/v1/server/info endpoint via the
response. Should be serialized as an integer in messages to allow
clients with missing enum names to still parse the result correctly.
Unknown version
Initial version
Interior nodes in chunked data now include the length of chunked data to allow seeking.
Add support for last modified timestamps to file entries in directory nodes
Interior nodes in chunked data now include the rolling hash of any leaf nodes
One past the latest known version number. Add new version numbers above this point.
Latest API version
Converter for which forces serialization as an integer, to override any
default that may be enabled.
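The rationale above (serialize the version enum as an integer so clients with missing enum names still parse correctly) can be illustrated with a short sketch. The member names and values below are placeholders, not the real Horde version numbers:

```python
import json
from enum import IntEnum

class ApiVersion(IntEnum):
    # Illustrative members only; the real values come from the server.
    Unknown = 0
    Initial = 1
    Latest = 2

def serialize_version(version):
    # Writing the integer (not the name) lets older clients parse
    # messages containing versions they have no enum member for.
    return json.dumps({"apiVersion": int(version)})

def parse_version(raw):
    value = json.loads(raw)["apiVersion"]
    # Map to a known member when possible; otherwise keep the raw int
    # so an unknown future version still round-trips.
    try:
        return ApiVersion(value)
    except ValueError:
        return value
```

Serializing the name instead would make `"apiVersion": "SomeFutureVersion"` unparseable for an older client, which is exactly what the integer form avoids.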
Default implementation of
Accessor for the logger instance
Constructor
Notify listeners that the auth state has changed
Creates an http client for satisfying requests
Creates an http client for satisfying requests
Default implementation of
Constructor
Default implementation of
Constructor
Allows creating instances
Constructor
Create a client using the user's default access token
Extension methods for Horde
Adds Horde-related services with the default settings
Collection to register services with
Adds Horde-related services
Collection to register services with
Callback to configure options
HTTP message handler which automatically refreshes access tokens as required
Option for HTTP requests that can override the default behavior for whether to enable interactive auth prompts
Constructor
Shared object used to track the latest access obtained token
Event handler for the auth state changing
Constructor
Checks if we have a valid auth header at the moment
Gets the current access token
Gets the current auth state instance. Fails if the current auth task has not finished.
Resets the current auth state
Marks the given access token as invalid, having attempted to use it and got an unauthorized response
The access header to invalidate
Try to get a configured auth header
Gets the current access token
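The refresh-on-unauthorized flow described above (attempt the request, invalidate the token on a 401, fetch a fresh one, retry) can be sketched as follows. `fetch_token` and `send` are hypothetical callables standing in for the auth provider and HTTP stack; this is not the Horde message handler itself:

```python
class TokenRefreshingClient:
    """Sketch of an access-token refresh handler, assuming the caller
    supplies token acquisition and request-sending callables."""

    def __init__(self, fetch_token, send):
        self._fetch_token = fetch_token
        self._send = send
        self._token = None

    def invalidate(self, token):
        # Drop the token only if it is still the current one, so a
        # refresh performed by another request is not discarded.
        if self._token == token:
            self._token = None

    def request(self, url):
        if self._token is None:
            self._token = self._fetch_token()
        token = self._token
        status, body = self._send(url, token)
        if status == 401:
            # The token was rejected: mark it invalid, refresh, retry once.
            self.invalidate(token)
            self._token = self._fetch_token()
            status, body = self._send(url, self._token)
        return status, body
```

Retrying only once avoids looping forever when credentials are genuinely invalid.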
Wraps an Http client which communicates with the Horde server
Name of an environment variable containing the Horde server URL
Name of an environment variable containing a token for connecting to the Horde server
Name of clients created from the http client factory
Name of clients used for anonymous requests.
Name of clients created from the http client factory for handling upload redirects. Should not contain Horde auth headers.
Accessor for the inner http client
Base address for the Horde server
Constructor
The inner HTTP client instance
Configures a JSON serializer to read Horde responses
options for the serializer
Check account login status.
Creates a new artifact
Name of the artifact
Additional search keys tagged on the artifact
Description for the artifact
Stream to create the artifact for
Commit for the artifact
Keys used to identify the artifact
Metadata for the artifact
Cancellation token for the operation
Deletes an artifact
Identifier for the artifact
Cancellation token for the operation
Gets metadata about an artifact object
Identifier for the artifact
Cancellation token for the operation
Gets a zip stream for a particular artifact
Identifier for the artifact
Cancellation token for the operation
Finds artifacts with a certain type with an optional streamId
Stream to look for the artifact in
The minimum change number for the artifacts
The maximum change number for the artifacts
Name of the artifact
Type to find
Keys for artifacts to return
Maximum number of results to return
Cancellation token for the operation
Information about all the artifacts
Finds artifacts with a certain type with an optional streamId
Identifiers to return
Stream to look for the artifact in
The minimum change number for the artifacts
The maximum change number for the artifacts
Name of the artifact
Type to find
Keys for artifacts to return
Maximum number of results to return
Cancellation token for the operation
Information about all the artifacts
Create a new dashboard preview item
Request to create a new preview item
Cancellation token for the operation
Config information needed by the dashboard
Update a dashboard preview item
Config information needed by the dashboard
Query dashboard preview items
Config information needed by the dashboard
Query parameters for other tools
Cancellation token for the operation
Parameters for other tools
Query parameters for other tools
Path for properties to return
Cancellation token for the operation
Information about all the projects
Query all the projects
Whether to include streams in the response
Whether to include categories in the response
Cancellation token for the operation
Information about all the projects
Retrieve information about a specific project
Id of the project to get information about
Cancellation token for the operation
Information about the requested project
Query all the secrets available to the current user
Cancellation token for the operation
Information about all the projects
Retrieve information about a specific project
Id of the secret to retrieve
Cancellation token for the operation
Information about the requested project
Gets information about the currently deployed server version
Cancellation token for the operation
Information about the deployed server instance
Attempts to read a named storage ref from the server
Path to the ref
Max allowed age for a cached value to be returned
Cancellation token for the operation
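The max-allowed-age parameter above implies a simple freshness check: return the cached value when it is younger than the limit, otherwise re-fetch and re-cache. A hypothetical helper illustrating that behavior (not the Horde client's implementation):

```python
import time

class CachedRef:
    """Cached value plus the time it was fetched."""
    def __init__(self, value, fetched_at):
        self.value = value
        self.fetched_at = fetched_at

def read_ref(cache, path, max_age_seconds, fetch, now=time.time):
    # Serve from cache while the entry is within the allowed age.
    entry = cache.get(path)
    if entry is not None and now() - entry.fetched_at <= max_age_seconds:
        return entry.value
    # Otherwise fetch a fresh value and remember when we got it.
    value = fetch(path)
    cache[path] = CachedRef(value, now())
    return value
```

Passing a max age of zero forces a fresh read on every call, which is a common escape hatch for callers that cannot tolerate staleness.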
Gets telemetry for Horde within a given range
End date for the range
Number of hours to return
Timezone offset
Cancellation token for the operation
Enumerates all the available tools.
Gets information about a particular tool
Gets information about a particular deployment
Gets a zip stream for a particular deployment
Creates a new tool deployment
Id for the tool
Version string for the new deployment
Duration over which to deploy the tool
Whether to create the deployment but not start rolling it out yet
Location of a directory node describing the deployment
Cancellation token for the operation
Gets job information for the given job id. Fails if the job id does not exist.
Id of the job to get information for
Cancellation token for the operation
Get the given log file
Id of the log file to retrieve
Text to search for in the log
Number of lines to return (default 5)
Cancellation token for the operation
Get the requested number of lines from given logFileId, starting at index
Id of log file to retrieve lines from
Start index of lines to retrieve
Number of lines to retrieve
Cancellation token for the operation
Get graph of the given job
Contains BuildGraph information for the job
Concrete implementation of which manages the lifetime of the instance.
Constructor
Create a default timeout retry policy
Create a default timeout retry policy
Static helper methods for implementing Horde HTTP requests with standard semantics
Create the shared instance of JSON options for HordeHttpClient instances
Configures a JSON serializer to read Horde responses
options for the serializer
Deletes a resource from an HTTP endpoint
Http client instance
The url to retrieve
Cancels the request
Gets a resource from an HTTP endpoint and parses it as a JSON object
The object type to return
Http client instance
The url to retrieve
Cancels the request
New instance of the object
Posts an object to an HTTP endpoint as a JSON object, and parses the response object
The object type to post
Http client instance
The url to retrieve
The object to post
Cancels the request
The response parsed into the requested type
Posts an object to an HTTP endpoint as a JSON object, and parses the response object
The object type to return
The object type to post
Http client instance
The url to retrieve
The object to post
Cancels the request
The response parsed into the requested type
Puts an object to an HTTP endpoint as a JSON object
The object type to post
Http client instance
The url to write to
The object to post
Cancels the request
Response message
Puts an object to an HTTP endpoint as a JSON object
The object type to return
The object type to post
Http client instance
The url to write to
The object to post
Cancels the request
Response message
Options for configuring the Horde connection
Address of the Horde server
Access token to use for connecting to the server
Whether to allow opening a browser window to prompt for authentication
Options for creating new bundles
Options for caching bundles
Options for the storage backend cache
Gets the configured server URL, or the default value
Reads the server URL from the environment
Gets the default server URL for the current user
Default URL
Sets the default server url for the current user
Horde server URL to use
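The entries above describe a resolution order for the server URL: explicit configuration first, then the environment, then the per-user default. A sketch of that fallback chain (the environment variable name below is a placeholder, not the real one):

```python
import os

def resolve_server_url(configured=None, env=None, default=None):
    """Resolves the server URL in the order sketched above. Hypothetical
    helper; the variable name "HORDE_SERVER_URL" is an assumption."""
    if env is None:
        env = os.environ
    # 1. Explicit configuration wins.
    if configured:
        return configured
    # 2. Fall back to the environment.
    from_env = env.get("HORDE_SERVER_URL")
    if from_env:
        return from_env
    # 3. Finally, the stored per-user default.
    return default
```

This ordering lets CI systems override a developer's saved default without touching on-disk configuration.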
Options for the storage backend cache
Directory to store cached data
Maximum size of the cache, in bytes
Interface for Horde functionality.
URL of the horde server
Accessor for the artifact collection
Accessor for the compute client
Accessor for the project collection
Accessor for the secret collection
Accessor for the tools collection
Event triggered whenever the access token state changes
Connect to the Horde server
Whether to allow prompting for credentials
Cancellation token for the operation
True if the connection succeeded
Gets the current connection state
Gets an access token for the server
Gets a gRPC client interface
Creates a Horde HTTP client
Creates a storage namespace for the given base path
Creates a logger device that writes data to the server
Extension methods for
Creates a storage namespace for a particular id
Creates a storage namespace for a particular artifact
Creates a storage namespace for a particular log
Creates a storage namespace for a particular tool
Reads a blob storage ref from a path
Reads a typed blob storage ref from a path
Provides access to instances for Horde with a default resilience pipeline.
Instance of the http message handler
Instance of a particular BuildGraph script error
Determines if the given event id matches
The event id to compare
True if the given event id matches
Instance of a particular compile error
Annotation describing the compile type
Annotation specifying a group for compile issues from this node
Constructor
Determines if the given event id matches
The event id to compare
True if the given event id matches
Instance of a particular compile error
Instance of a particular compile error
Default handler for log events not matched by any other handler
Constructor
Handler for log events with issue fingerprints embedded in the structured log data itself
Instance of a particular Gauntlet error
Prefix for framework keys
Prefix for test keys
Prefix for device keys
Prefix for build drop keys
Prefix for fatal failure keys
Callstack log type property
Summary log type property
Max Message Length to hash
Known Gauntlet events
Constructor
Determines if the given event id matches
The event id to compare
True if the given event id matches
Return the prefix string associated with the event id
The event id to get the information from
The corresponding prefix as a string
Produce a hash from error message
The issue event
Receives a set of the keys
Receives a set of metadata
Set true if a callstack property was found
Instance of a particular compile error
Known general events
Constructor
Determines if the given event is general and should be salted to make it unique
The event id to compare
True if the given event id matches
Instance of a localization error
Determines if the given event id matches
The event id to compare
True if the given event id matches
Determines if an event should be masked by this
Extracts a list of source files from an event
The event data
List of source files
Instance of a Perforce case mismatch error
Determines if the given event id matches
The event id to compare
True if the given event id matches
Instance of a specific Thread Sanitizer error
Log value describing the thread sanitizer error summary reason
Log value describing the thread sanitizer error summary reason
Log value describing the thread sanitizer error summary source file
Instance of a particular compile error
Constructor
Instance of a particular shader compile error
Instance of a particular compile error
Determines if the given event id matches
The event id to compare
True if the given event id matches
Determines if an event should be masked by this
Instance of a particular systemic error
Known systemic errors
Determines if the given event id matches
The event id to compare
True if the given event id matches
Constructor
Instance of a Perforce case mismatch error
Fingerprint for an issue
The type of issue, which defines the handler to use for it
Template string for the issue summary
List of keys which identify this issue.
Set of keys which should trigger a negative match
Collection of additional metadata added by the handler
Filter for changes that should be included in this issue
Extension methods for
Checks if a fingerprint matches another fingerprint
The first fingerprint to compare
The other fingerprint to compare to
True if the fingerprints match
Checks if a fingerprint matches another fingerprint for creating a new span
The first fingerprint to compare
The other fingerprint to compare to
True if the fingerprints match
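From the fingerprint fields described above (type, identifying keys, negative-match keys), a plausible grouping rule follows: same type, at least one shared key, and no key landing in the other fingerprint's reject set. The sketch below illustrates that rule; it is an assumption about the shape of the check, not Horde's exact matching logic:

```python
def fingerprints_match(a, b):
    """Illustrative fingerprint match: a and b are dicts with a "type"
    string, a "keys" set, and a "reject" set of negative-match keys."""
    # Different handlers never merge.
    if a["type"] != b["type"]:
        return False
    # A key hitting the other side's reject set forces a negative match.
    if (a["keys"] & b["reject"]) or (b["keys"] & a["reject"]):
        return False
    # Otherwise, any shared identifying key groups the two together.
    return bool(a["keys"] & b["keys"])
```

The reject set lets a handler keep two superficially similar failures apart even when they touch the same files.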
Set of rules for filtering the list of suspects for an issue
Filter exclude all changes
Filter including all changes
Set of extensions to treat as code
Set of file extensions to treat as content
Wraps a log event and allows it to be tagged by issue handlers
Index of the line within this log
Severity of the event
The type of event
Gets this event data as a BSON document
Constructor
Renders the entire message of this event
A group of objects with their fingerprint
The type of issue, which defines the handler to use for it
Template string for the issue summary
List of keys which identify this issue.
Collection of additional metadata added by the handler
Filter for changes that should be included in this issue
Individual log events
Constructor
The type of issue
Template for the summary string to display for the issue
Filter for changes covered by this issue
Constructor
The type of issue
Template for the summary string to display for the issue
Filter for changes covered by this issue
Extension methods for logging issue data
Adds an issue fingerprint to a log event
Log event to modify
Fingerprint for the issue
Enters a scope which annotates log messages with the supplied issue fingerprint
Logger device to operate on
Fingerprint for the issue
Disposable object for the lifetime of this scope
Fingerprint for an issue. Can be embedded into a structured log event under the "$issue" property to guide how Horde should group the event.
The type of issue, which defines the handler to use for it
Template string for the issue summary
List of keys which identify this issue.
Set of keys which should trigger a negative match
Collection of additional metadata added by the handler
Filter for changes that touch files which should be included in this issue
Keys set which is null when empty
Metadata set which is null when empty
Default constructor
Constructor
Marks an issue handler that should be automatically inserted into the pipeline
Class of handler which can be explicitly enabled via a workflow
Interface for issue matchers
Priority value for this issue handler. Handlers are processed in order of decreasing priority.
Attempts to assign a log event to an issue
Events to process
Issue definition for this log event
Gets all the issues created by this handler
Context object for issue handlers
Identifier for the current stream
Identifier of the template
Identifier for the current node name
Annotations for this node
Constructor
Type of a key in an issue, used for grouping.
Unknown type
Filename
Secondary file
Name of a symbol
Hash of a particular error
Identifier for a particular step
Defines a key which can be used to group an issue with other issues
Name of the key
Type of the key
Arbitrary string that can be used to discriminate between otherwise identical keys, limiting the issues that it can merge with.
Constructor
Constructor
Creates an issue key for a file
Creates an issue key for a file
Creates an issue key for a particular hash
Creates an issue key for a particular step
Creates an issue key for a particular step and severity
Extension methods for issue keys
Adds a new entry to a set
Adds all the assets from the given log event
Set of keys
The log event to parse
Extracts a list of source files from an event
Set of keys
The event data
Extracts a list of source files from an event
Set of keys
The event hash
Scope for merging this hash value
Extracts a list of source files from an event
Set of keys
The event data
Add a new source file to a list of unique source files
List of source files
File to add
Type of key to add
Parses symbol names from a log event
List of source files
The log event data
The severity of an issue
Unspecified severity
This error represents a warning
This issue represents an error
Identifies a particular changelist and job
The changelist number
The commit for this step
Severity of the issue in this step
Name of the job containing this step
The unique job id
The unique batch id
The unique step id
Time at which the step ran
The unique log id
Trace of a set of node failures across multiple steps
Unique id of this span
The template containing this step
Name of the step
Workflow that this span belongs to
The previous build
The failing builds for a particular event
The following successful build
Information about a particular step
Unique id of the stream
Minimum commit affected by this issue (ie. last successful build)
Maximum commit affected by this issue (ie. next successful build)
Minimum changelist affected by this issue (ie. last successful build)
Maximum changelist affected by this issue (ie. next successful build)
Map of steps to (event signature id -> trace id)
Outcome of a particular build
Unknown outcome
Build succeeded
Build failed
Build finished with warnings
Information about a template affected by an issue
The template id
The template name
Whether it has been resolved or not
The issue severity of the affected template
Summary for the state of a stream in an issue
Id of the stream
Name of the stream
Whether the issue has been resolved in this stream
The affected templates
List of affected template ids
List of resolved template ids
List of unresolved template ids
Stores information about a build health issue
The unique object id
Time at which the issue was created
Time at which the issue was retrieved
The associated project for the issue
The summary text for this issue
Detailed description text
Description of the current fingerprint used for issue identification
Severity of this issue
Whether the issue is promoted
Owner of the issue [DEPRECATED]
User id of the owner [DEPRECATED]
Owner of the issue
User that nominated the current owner [DEPRECATED]
Owner of the issue
Time that the issue was acknowledged
Perforce changelist that fixed this issue
Commit that fixed this issue
Whether the issue is marked fixed as a systemic issue
Time at which the issue was resolved
Name of the user that resolved the issue [DEPRECATED]
User id of the person that resolved the issue [DEPRECATED]
User that resolved the issue
Time at which the issue was verified
Time that the issue was last seen
List of stream paths affected by this issue
List of affected stream ids
List of unresolved streams
List of affected streams
Most likely suspects for causing this issue [DEPRECATED]
User ids of the most likely suspects [DEPRECATED]
Most likely suspects for causing this issue
Whether to show alerts for this issue
Key for this issue in external issue tracker
User who quarantined the issue
The UTC time when the issue was quarantined
User who force closed the issue
The workflow thread url for this issue
Information about a span within an issue
Unique id of this span
The template containing this step
Name of the step
Workflow for this span
The previous build
The following successful build
Stores information about a build health issue
The unique object id
Time at which the issue was created
Time at which the issue was retrieved
The associated project for the issue
The summary text for this issue
Detailed description text
Severity of this issue
Severity of this issue in the stream
Whether the issue is promoted
Owner of the issue
Owner of the issue
Time that the issue was acknowledged
Changelist that fixed this issue
Changelist that fixed this issue
Whether the issue is marked fixed as a systemic issue
Time at which the issue was resolved
User that resolved the issue
Time at which the issue was verified
Time that the issue was last seen
Spans for this issue
Key for this issue in external issue tracker
User who quarantined the issue
The UTC time when the issue was quarantined
The workflow thread url for this issue
Workflows for which this issue is open
Request an issue to be updated
Summary of the issue
Description of the issue
Whether the issue is promoted or not
New user id for owner of the issue, can be cleared by passing empty string
User id that nominated the new owner
Whether the issue has been acknowledged
Whether the user has declined this issue
The change at which the issue is claimed fixed. 0 = not fixed, -1 = systemic issue.
The change at which the issue is claimed fixed. "" = not fixed.
Set to mark the issue as fixed systemically
Whether the issue should be marked as resolved
List of spans to add to this issue
List of spans to remove from this issue
A key to issue in external tracker
Id of user quarantining issue
Id of the user forcibly closing this issue, skipping verification checks. Useful when a failing step has been removed, for example.
External issue project information
The project key
The name of the project
The id of the project
component id => name
IssueType id => name
Marks an issue as fixed by another user. Designed for use from a Perforce trigger.
Name of the user that fixed the issue
Change that fixed the issue
Request an issue to be created on external issue tracking system
Horde issue which is linked to external issue
StreamId of a stream with this issue
Summary text for external issue
External issue project id
External issue component id
External issue type id
Optional description text for external issue
Optional link to issue on Horde
Response for externally created issue
External issue key
Link to issue on external tracking site
Constructor
External issue response object
The external issue key
The issue link on external tracking site
The issue status name, "To Do", "In Progress", etc
The issue resolution name, "Fixed", "Closed", etc
The issue priority name, "1 - Critical", "2 - Major", etc
The current assignee's user name
The current assignee's display name
The current assignee's email address
Entry in an issue's metadata collection. Implemented as a case-insensitive key and value
Key for the metadata item
Value for the metadata
Constructor
Extension methods for
Adds a new entry to a set
Gets all the metadata values with a given key
Set of entries to search
Key name to search for
All values with the given key
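The case-insensitive key lookup described above can be sketched in a few lines. This is a hypothetical stand-in modeling metadata entries as (key, value) tuples, not the actual extension method:

```python
def find_metadata_values(entries, key):
    """Returns every value whose key matches case-insensitively,
    preserving the order of the entries."""
    wanted = key.lower()
    return [value for entry_key, value in entries if entry_key.lower() == wanted]
```

Because keys are compared case-insensitively, handlers that emit "Platform" and "platform" contribute to the same lookup.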
Configuration for an issue workflow
Identifier for this workflow
Times of day at which to send a report
Name of the tab to post summary data to
Channel to post summary information for these templates.
Whether to include issues with a warning status in the summary
Whether to group issues by template in the report
Channel to post threads for triaging new issues
Prefix for all triage messages
Suffix for all triage messages
Instructions posted to triage threads
User id of a Slack user/alias to ping if there is nobody assigned to an issue by default.
Slack user/alias to ping for specific issue types (such as Systemic), if there is nobody assigned to an issue by default.
Alias to ping if an issue has not been resolved for a certain amount of time
Times after an issue has been opened to escalate to the alias above, in minutes. Continues to notify on the last interval once reaching the end of the list.
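The escalation schedule above clamps to its last entry: once the interval list is exhausted, the final interval repeats for every subsequent notification. A minimal sketch of that selection, assuming intervals in minutes:

```python
def escalation_interval(intervals, escalation_index):
    """Picks the delay before the next escalation ping. Past the end of
    the configured list, the last interval repeats, matching the
    behavior described above. A sketch, not the server implementation."""
    if not intervals:
        return None
    return intervals[min(escalation_index, len(intervals) - 1)]
```

For example, with intervals `[30, 60, 240]` the third and every later escalation fire 240 minutes apart.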
Maximum number of people to mention on a triage thread
Whether to mention people on this thread. Useful to disable for testing.
Uses the admin.conversations.invite API to invite users to the channel
Skips sending reports when there are no active issues.
Whether to show warnings about merging changes into the origin stream.
Additional node annotations implicit in this workflow
External issue tracking configuration for this workflow
Additional issue handlers enabled for this workflow
External issue tracking configuration for a workflow
Project key in external issue tracker
Default component id for issues using workflow
Default issue type id for issues using workflow
Identifier for a workflow
Id to construct from
Identifier for a workflow
Id to construct from
Id to construct from
Empty workflow id constant
Constructor
Converter to and from instances.
Configuration for an issue workflow
Constructor
Constructor
External issue tracking configuration for a workflow
Constructor
Constructor
Identifier for a job
Id to construct from
Identifier for a job
Id to construct from
Id to construct from
Converter to and from instances.
Information required to create a node
The name of this node
Inputs for this node
Output from this node
Indices of nodes which must have succeeded for this node to run
Indices of nodes which must have completed for this node to run
The priority of this node
Whether this node can be retried
This node can start running early, before dependencies of other nodes in the same group are complete
Whether to include warnings in diagnostic output
Average time to execute this node based on historical trends
Credentials required for this node to run. This dictionary maps from environment variable names to a credential property in the format 'CredentialName.PropertyName'.
Properties for this node
Annotations for this node
Constructor
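The credential mapping above uses the format 'CredentialName.PropertyName'; a minimal parser sketch (the helper name, and splitting on the first period, are assumptions for illustration):

```python
def split_credential(value):
    """Split a 'CredentialName.PropertyName' entry into its credential
    name and property name. Splitting on the first '.' is an assumption
    about how the two parts are delimited."""
    name, sep, prop = value.partition(".")
    if not sep or not name or not prop:
        raise ValueError(f"expected 'CredentialName.PropertyName', got {value!r}")
    return name, prop
```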
Information about a group of nodes
The type of agent to execute this group
Nodes in the group
Constructor
Information about an aggregate
Name of the aggregate
Nodes which must be part of the job for the aggregate to be shown
Constructor
Information about a label
Category of the aggregate
Label for this aggregate
Name to show for this label on the dashboard
Category to show this label in on the dashboard
Name to show for this label in UGS
Project to display this label for in UGS
Nodes which must be part of the job for the aggregate to be shown
Nodes to include in the status of this aggregate, if present in the job
Information about a graph
The hash of the graph
Array of nodes for this job
List of aggregates
List of labels for the graph
Constructor
A unique dependency graph instance
Hash of this graph
Schema version for this document
List of groups for this graph
List of aggregates for this graph
Status labels for this graph
Artifacts for this graph
Extension methods for graphs
Gets the node from a node reference
The graph instance
The node reference
The node for the given reference
Tries to find a node by name
The graph to search
Name of the node
Receives the node reference
True if the node was found, false otherwise
Tries to find a node by name
The graph to search
Name of the node
Receives the node
True if the node was found, false otherwise
Tries to find a node by name
The graph to search
Name of the node
Receives the aggregate index
True if the node was found, false otherwise
Tries to find a node by name
The graph to search
Name of the node
Receives the aggregate
True if the node was found, false otherwise
Gets a list of dependencies for the given node
The graph instance
The node to return dependencies for
List of dependencies
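Resolving a node's full dependency set (its input and order dependencies, transitively) can be sketched like this; the dict-based node shape is an illustrative stand-in for the graph types:

```python
def collect_dependencies(nodes, index):
    """Return the set of node indices that the given node transitively
    depends on. Each node is a dict with 'input_deps' and 'order_deps'
    lists of indices (an assumed shape, not the Horde types)."""
    seen = set()
    stack = [index]
    while stack:
        current = stack.pop()
        node = nodes[current]
        for dep in node["input_deps"] + node["order_deps"]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen
```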
Represents a node in the graph
The name of this node
References to inputs for this node
List of output names
Indices of nodes which must have succeeded for this node to run
Indices of nodes which must have completed for this node to run
The priority that this node should be run at, within this job
Whether this node can be run multiple times
This node can start running early, before dependencies of other nodes in the same group are complete
Whether to include warnings in the output (defaults to true)
List of credentials required for this node. Each entry maps an environment variable name to a credential in the form "CredentialName.PropertyName".
Properties for this node
Annotations for this node
Information about a sequence of nodes which can execute on a single agent
The type of agent to execute this group
Nodes in this group
Reference to a node within another group
The group index of the referenced node
The node index of the referenced node
Private constructor for serialization
Constructor
Index of the group containing the node
Index of the node within the group
Converts this reference to a node name
List of groups that this reference points to
Name of the referenced node
Output from a node
Node producing the output
Index of the output
Constructor
A collection of node references
Name of the aggregate
List of nodes for the aggregate to be valid
Change at which to display a label
The current changelist
The last code changelist
Label indicating the status of a set of nodes
Label to show in the dashboard. Null if does not need to be shown.
Category for the label. May be null.
Name to display for this label in UGS
Project which this label applies to, for UGS
Which change to display the label on
List of required nodes for the aggregate to be valid
List of optional nodes to include in the aggregate state
Extension methods for ILabel
Enumerate all the required dependencies of this node group
The label instance
List of groups for the job containing this aggregate
Sequence of nodes
Artifact produced by a graph
Name of the artifact
Type of the artifact
Description for the artifact
Base path for files in the artifact
Keys for finding the artifact
Metadata for the artifact
Name of the node producing this artifact
Tag for the artifact files
Information required to create a node
The name of this node
Input names
Output names
List of nodes which must succeed for this node to run
List of nodes which must have completed for this node to run
The priority of this node
This node can be run multiple times
This node can start running early, before dependencies of other nodes in the same group are complete
Whether to include warnings in the diagnostic output
Credentials required for this node to run. This dictionary maps from environment variable names to a credential property in the format 'CredentialName.PropertyName'.
Properties for this node
Additional user annotations for this node
Constructor
Name of the node
List of inputs for the node
List of output names for the node
List of nodes which must have completed successfully for this node to run
List of nodes which must have completed for this node to run
Priority of this node
Whether the node can be run multiple times
Whether the node can run early, before dependencies of other nodes in the same group complete
Whether to include warnings in the diagnostic output (defaults to true)
Credentials required for this node to run
Properties for the node
User annotations for this node
Constructor
Existing graph containing a node
Node to copy
Information about a group of nodes
The type of agent to execute this group
Nodes in the group
Constructor
The type of agent to execute this group
Nodes in this group
Constructor
Graph containing the node group
Node group to copy
Information about a label
Category for this label
Name of the aggregate
Name of the aggregate
Category for this label
Name of the badge in UGS
Project to show this label for in UGS
Which change the label applies to
Nodes which must be part of the job for the aggregate to be valid
Nodes which must be part of the job for the aggregate to be valid
Information about an aggregate
Name of the aggregate
Nodes which must be part of the job for the aggregate to be valid
Constructor
Name of this aggregate
Nodes which must be part of the job for the aggregate to be shown
Information about an artifact
Information about an artifact
Interface which wraps a generic key/value dictionary to provide specific node annotations
Workflow to use for triaging issues from this node
Whether to create issues for this node
Whether to automatically assign issues that could only be caused by one user, or have a well defined correlation with a modified file.
Automatically assign any issues to the given user
Whether to notify all submitters between a build succeeding and failing, allowing them to step forward and take ownership of an issue.
Key to use for grouping issues together, preventing them being merged with other groups
Whether failures in this node should be flagged as build blockers
Set of annotations for a node
Empty annotation dictionary
Constructor
Constructor
Merge in entries from another set of annotations
Document describing a job
Job argument indicating a target that should be built
Name of the node which parses the buildgraph script
Identifier for the job. Randomly generated.
The stream that this job belongs to
The template ref id
The template that this job was created from
Hash of the graph definition
Graph for this job
Id of the user that started this job
Id of the user that aborted this job. Set to null if the job is not aborted.
Optional reason for why the job was canceled
Identifier of the bisect task that started this job
Name of the job.
The commit to build
The code commit for this build
The preflight changelist number
Description for the shelved change if running a preflight
Priority of this job
For preflights, submit the change if the job is successful
The submitted changelist number
Message produced by trying to auto-submit the change
Whether to update issues based on the outcome of this job
Whether to promote issues by default based on the outcome of this job
Time that the job was created (in UTC)
Options for executing the job
Claims inherited from the user that started this job
Array of jobstep runs
Parameters for the job
Optional user-defined properties for this job
Custom list of targets for the job. If null or empty, the list of targets is determined from the command line.
Additional arguments for the job, when a set of parameters are applied.
Environment variables for the job
Issues associated with this job
Unique id for notifications
Whether to show badges in UGS for this job
Whether to show alerts in UGS for this job
Notification channel for this job.
Notification channel filter for this job.
Mapping of label ids to notification trigger ids for notifications
List of reports for this step
List of downstream job triggers
The last update time
Update counter for this document. Any updates should compare-and-swap based on the value of this counter, or increment it in the case of server-side updates.
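The compare-and-swap contract described by the update counter can be sketched as an optimistic-concurrency loop; `InMemoryStore` and the dict document shape here are hypothetical stand-ins for the real collection types.

```python
class InMemoryStore:
    """Minimal stand-in for a document store keyed by job id."""

    def __init__(self):
        self._docs = {}

    def put(self, key, doc):
        self._docs[key] = dict(doc)

    def get(self, key):
        return dict(self._docs[key])

    def compare_and_swap(self, key, expected_index, new_doc):
        # Succeeds only if the stored counter still matches what the
        # caller originally read.
        if self._docs[key]["update_index"] != expected_index:
            return False
        self._docs[key] = dict(new_doc)
        return True


def update_with_cas(store, job_id, mutate, max_attempts=5):
    """Apply `mutate` to the job document, retrying if another writer
    bumped the update counter first."""
    for _ in range(max_attempts):
        doc = store.get(job_id)
        new_doc = mutate(dict(doc))
        new_doc["update_index"] = doc["update_index"] + 1
        if store.compare_and_swap(job_id, doc["update_index"], new_doc):
            return new_doc
    raise RuntimeError("too many failed compare-and-swap attempts")
```

A concurrent writer that bumps the counter between the read and the swap forces a re-read, which is what the "compare-and-swap based on the value of this counter" guidance above calls for.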
Gets the latest job state
Cancellation token for the operation
Attempt to get a batch with the given id
The job batch id
Receives the batch interface on success
True if the batch was found
Attempt to get a step with the given id
The job step id
Receives the step interface on success
True if the step was found
Attempt to delete the job
Cancellation token for the operation
True if the job was deleted. False if the job is not the latest revision.
Removes a job from the dispatch queue. Ignores the state of any batches still remaining to execute. Should only be used to correct for inconsistent state.
Cancellation token for the operation
Updates an existing job
Name of the job
Priority of the job
Automatically submit the job on completion
Changelist that was automatically submitted
Name of the user that aborted the job
Id for a notification trigger
New reports
New arguments for the job
New trigger ID for a label in the job
New downstream job id
Optional reason why the job was canceled
Cancellation token for the operation
Updates the state of a batch
Unique id of the batch to update
The new log file id
New state of the jobstep
Error code for the batch
Cancellation token for the operation
True if the job was updated, false if it was deleted
Update a jobstep state
Unique id of the batch containing the step
Unique id of the step to update
New state of the jobstep
New outcome of the jobstep
New error annotation for this jobstep
New state of request abort
New name of user that requested the abort
New log id for the jobstep
New id for a notification trigger
Whether the step should be retried
New priority for this step
New report documents
Property changes. Any properties with a null value will be removed.
The reason the job step was canceled
Cancellation token for the operation
True if the job was updated, false if it was deleted in the meantime
Attempts to update the node groups to be executed for a job. Fails if another write happens in the meantime.
New graph for this job
Cancellation token for the operation
True if the groups were updated to the given list. False if another write happened first.
Marks a job as skipped
Reason for this batch being failed
Cancellation token for the operation
Updated version of the job
Marks a batch as skipped
The batch to mark as skipped
Reason for this batch being failed
Cancellation token for the operation
Updated version of the job
Abort an agent's lease, and update the payload accordingly
Index of the batch to cancel
Reason for this batch being failed
Cancellation token for the operation
True if the job is updated
Attempt to assign a lease to execute a batch
Index of the batch
The pool id
New agent to execute the batch
Session of the agent that is to execute the batch
The lease unique id
Unique id of the log for the batch
Cancellation token for the operation
True if the batch is updated
Cancel a lease reservation on a batch (before it has started)
Index of the batch to cancel
Cancellation token for the operation
True if the job is updated
Extension methods for jobs
Gets the current job state
The job document
Job state
Gets the outcome for a particular named target. May be an aggregate or node name.
The job to check
The step outcome
Gets the outcome for a particular named target. May be an aggregate or node name.
The job to check
Graph for the job
Target to find an outcome for
The step outcome
Gets the outcome for a particular named target. May be an aggregate or node name.
Steps to include
The step outcome
Gets the outcome for a particular named target. May be an aggregate or node name.
The job to check
Graph for the job
Target to find an outcome for
The step outcome
Gets the job step for a particular node
The job to search
The node ref
Receives the jobstep on success
True if the jobstep was found
Gets a dictionary that maps objects to their associated
objects on a .
The job document
Map of to
Find the latest step executing the given node
The job being run
Node to find
The retried step information
Gets the estimated timing info for all nodes in the job
The job document
Graph for this job
Job timing information
Logger for any diagnostic messages
Map of node to expected timing info
Gets the average wait time for this batch
Graph for the job
The batch to get timing info for
The job timing information
Logger for diagnostic info
Wait time for the batch
Gets the average initialization time for this batch
Graph for the job
The batch to get timing info for
The job timing information
Logger for diagnostic messages
Initialization time for this batch
Creates a nullable timespan from a nullable number of seconds
The number of seconds to construct from
TimeSpan object
Attempts to get a batch with the given id
The job document
The batch id
The step id
On success, receives the step object
True if the batch was found
Finds the set of nodes affected by a label
The job document
Graph definition for the job
Index of the label. -1 or Graph.Labels.Count are treated as referring to the default label.
Set of nodes affected by the given label
Create a list of aggregate responses, combining the graph definitions with the state of the job
The job document
Graph definition for the job
List to receive all the responses
The default label state
Get the states of all labels for this job
The job to get states for
The graph for this job
Collection of label states by label index
Get the states of all UGS badges for this job
The job to get states for
The graph for this job
List of badge states
Get the states of all UGS badges for this job
The job to get states for
The graph for this job
The existing label states to get the UGS badge states from
List of badge states
Gets the state of a job, as a label that includes all steps
The job to query
Map from node to step
Receives the state of the label
Receives the outcome of the label
Gets the state of a label
Nodes to include in this label
Map from node to step
Receives the state of the label
Receives the outcome of the label
Gets a key attached to all artifacts produced for a job
Gets a key attached to all artifacts produced for a job step
Stores information about a batch of job steps
Job that this batch belongs to
Unique id for this group
The type of agent to execute this group
The log file id for this batch
The node group for this batch
Index of the group being executed
The state of this group
Error associated with this group
Steps within this run
The pool that this agent was taken from
The agent assigned to execute this group
The agent session that is executing this group
The lease that's executing this group
The weighted priority of this batch for the scheduler
Time at which the group became ready (UTC).
Time at which the group started (UTC).
Time at which the group finished (UTC)
Extension methods for IJobStepBatch
Attempts to get a step with the given id
The batch to search
The step id
On success, receives the step object
True if the step was found
Determines if new steps can be appended to this batch. We do not allow this after the last step has been completed, because the agent is shutting down.
The batch to search
True if new steps can be appended to this batch
Gets the wait time for this batch
The batch to search
Wait time for the batch
Gets the initialization time for this batch
The batch to search
Initialization time for this batch
Get the dependencies required for this batch to start, taking run-early nodes into account
The batch to search
List of node groups
Set of nodes that must have completed for this batch to start
Get the dependencies required for this batch to start, taking run-early nodes into account
Nodes in the batch to search
List of node groups
Set of nodes that must have completed for this batch to start
Embedded jobstep document
Job that this step belongs to
Batch that this step belongs to
Unique ID assigned to this jobstep. A new id is generated whenever a jobstep's order is changed.
The node for this step
Index of the node which this jobstep is to execute
The name of this node
References to inputs for this node
List of output names
Indices of nodes which must have succeeded for this node to run
Indices of nodes which must have completed for this node to run
Whether this node can be run multiple times
This node can start running early, before dependencies of other nodes in the same group are complete
Whether to include warnings in the output (defaults to true)
List of credentials required for this node. Each entry maps an environment variable name to a credential in the form "CredentialName.PropertyName".
Annotations for this node
Current state of the job step. This is updated automatically when runs complete.
Current outcome of the jobstep
Error from executing this step
The log id for this step
Unique id for notifications
Time at which the batch transitioned to the ready state (UTC).
Time at which the batch transitioned to the executing state (UTC).
Time at which the run finished (UTC)
Override for the priority of this step
If a retry is requested, stores the name of the user that requested it
Signal if a step should be aborted
If an abort is requested, stores the id of the user that requested it
Optional reason for why the job step was canceled
List of reports for this step
Reports for this jobstep.
Extension methods for job steps
Determines if a jobstep state is completed, skipped, or aborted.
True if the step is completed, skipped, or aborted
Determines if a jobstep is done by checking to see if it is completed, skipped, or aborted.
True if the step is completed, skipped, or aborted
Determines if a step should be timed out
Cumulative timing information to reach a certain point in a job
Wait time on the critical path
Sync time on the critical path
Duration to this point
Average wait time to this point
Average sync time to this point
Average duration to this point
Individual step timing information
Constructor
Copy constructor
The timing info object to copy from
Modifies this timing to wait for another timing
The other node to wait for
Waits for all the given timing info objects to complete
Other timing info objects to wait for
Constructs a new TimingInfo object which represents the last TimingInfo to finish
TimingInfo objects to wait for
New TimingInfo instance
Copies this info to a response object
Information about a chained job trigger
The target to monitor
The template to trigger on success
The triggered job id
Whether to run the latest change, or the default change for the template, when starting the new job. Uses the same change as the triggering job by default.
Report for a job or jobstep
Name of the report
Where to render the report
The artifact id
Inline data for the report
Implementation of IReport
Identifier for a job
Id to construct from
Identifier for a job
Id to construct from
Id to construct from
Constant value for an empty job id
Converter to and from instances.
State of the job
Waiting for resources
Currently running one or more steps
All steps have completed
Read-only interface for job options
Name of the executor to use
Whether to execute using Wine emulation on Linux
Executes the job lease in a separate process
What workspace materializer to use in WorkspaceExecutor. Will override any value from workspace config.
Options for executing a job inside a container
Number of days after which to expire jobs
Name of the driver to use
Options for executing a job
Merge defaults from another options object
Options for a job container
Whether to execute job inside a container
Image URL to container, such as "quay.io/podman/hello"
Container engine executable (docker or with full path like /usr/bin/podman)
Additional arguments to pass to container engine
Options for executing a job inside a container
Whether to execute job inside a container
Image URL to container, such as "quay.io/podman/hello"
Container engine executable (docker or with full path like /usr/bin/podman)
Additional arguments to pass to container engine
Merge defaults from another options object
Query selecting the base changelist to use
Constructor
Constructor
Parameters required to create a job
The stream that this job belongs to
The template for this job
Name of the job
The changelist number to build. Can be null for latest.
The changelist number to build. Can be null for latest.
Parameters to use when selecting the change to execute at.
List of change queries to evaluate
The preflight changelist number
The preflight commit
Job options
Priority for the job
Whether to automatically submit the preflighted change on completion
Whether to update issues based on the outcome of this job
Values for the template parameters
Arguments for the job
Additional arguments for the job
Targets for the job. Will override any parameters specified in the Arguments or Parameters section if specified.
Private constructor for serialization
Response from creating a new job
Unique id for the new job
Constructor
Unique id for the new job
Updates an existing job
New name for the job
New priority for the job
Set whether the job should be automatically submitted or not
Mark this job as aborted
Optional reason the job was canceled
New list of arguments for the job. Only -Target= arguments can be modified after the job has started.
Placement for a job report
On a panel of its own
In the summary panel
Information about a report associated with a job
Name of the report
Report placement
The artifact id
Content for the report
Constructor
Information about a job
Unique Id for the job
Name of the job
Unique id of the stream containing this job
The changelist number to build
The commit to build
The code changelist
The code commit to build
The preflight changelist number
The preflight commit
Description of the preflight
The template type
Hash of the actual template data
Hash of the graph for this job
The user that started this job [DEPRECATED]
The user that started this job [DEPRECATED]
The user that started this job
Bisection task id that started this job
The user that aborted this job [DEPRECATED]
The user that aborted this job
Optional reason the job was canceled
Priority of the job
Whether the change will automatically be submitted or not
The submitted changelist number
Message produced by trying to auto-submit the change
Time that the job was created
The global job state
Array of jobstep batches
List of labels
The default label, containing the state of all steps that are otherwise not matched.
List of reports
Artifacts produced by this job
Parameters for the job
Command line arguments for the job
Additional command line arguments for the job for when using the parameters block
Custom list of targets for the job
The last update time for this job
Whether to use the V2 artifacts endpoint
Whether issues are being updated by this job
Whether the current user is allowed to update this job
Constructor
Default constructor needed for JsonSerializer
Response describing an artifact produced during a job
Identifier for this artifact, if it has been produced
Name of the artifact
Artifact type
Description to display for the artifact on the dashboard
Keys for the artifact
Metadata for the artifact
Step producing the artifact
Response describing an artifact produced during a job
Identifier for this artifact, if it has been produced
Name of the artifact
Artifact type
Description to display for the artifact on the dashboard
Keys for the artifact
Metadata for the artifact
Step producing the artifact
Identifier for this artifact, if it has been produced
Name of the artifact
Artifact type
Description to display for the artifact on the dashboard
Keys for the artifact
Metadata for the artifact
Step producing the artifact
The timing info for a job
The job response
Timing info for each step
Timing information for each label
Constructor
The job response
Timing info for each step
Timing info for each label
The timing info for
Timing info for each job
Constructor
Timing info for each job
Request used to update a jobstep
The new jobstep state
Outcome from the jobstep
If the step has been requested to abort
Optional reason the job step was canceled
Specifies the log file id for this step
Whether the step should be re-run
New priority for this step
Properties to set. Any entries with a null value will be removed.
Response object when updating a jobstep
If a new step is created (due to specifying the retry flag), specifies the batch id
If a step is retried, includes the new step id
Reference to the output of a step within the job
Step producing the output
Index of the output from this step
Reference to the output of a step within the job
Step producing the output
Index of the output from this step
Step producing the output
Index of the output from this step
Returns information about a jobstep
The unique id of the step
Index of the node which this jobstep is to execute
The name of this node
Whether this node can be run multiple times
This node can start running early, before dependencies of other nodes in the same group are complete
Whether to include warnings in the output (defaults to true)
References to inputs for this node
List of output names
Indices of nodes which must have succeeded for this node to run
Indices of nodes which must have completed for this node to run
List of credentials required for this node. Each entry maps an environment variable name to a credential in the form "CredentialName.PropertyName".
Annotations for this node
Current state of the job step. This is updated automatically when runs complete.
Current outcome of the jobstep
Error describing additional context for why a step failed to complete
If the step has been requested to abort
Name of the user that requested the abort of this step [DEPRECATED]
The user that requested this step be run again
Optional reason the job step was canceled
Name of the user that requested this step be run again [DEPRECATED]
The user that requested this step be run again
The log id for this step
Time at which the batch was ready (UTC).
Time at which the batch started (UTC).
Time at which the batch finished (UTC)
List of reports
User-defined properties for this jobstep.
The state of a particular run
Waiting for dependencies of at least one jobstep to complete
Ready to execute
Preparing to execute work
Executing work
Preparing to stop
All steps have finished executing
Error code for a batch not being executed
No error
The stream for this job is unknown
The given agent type for this batch was not valid for this stream
The pool id referenced by the agent type was not found
There are no agents in the given pool currently online
There are no agents in this pool that are online
Unknown workspace referenced by the agent type
Cancelled
Lost connection with the agent machine
Lease terminated prematurely but can be retried.
An error occurred while executing the lease. Cannot be retried.
The change that the job is running against is invalid
Step was no longer needed during a job update
Syncing the branch failed
Legacy alias for
Request to update a jobstep batch
The new log file id
The state of this batch
Information about a jobstep batch
Unique id for this batch
Index of the group being executed
The agent type
The agent assigned to execute this group
Rate for using this agent (per hour)
The agent session holding this lease
The lease that's executing this group
The unique log file id
The state of this batch
Error code for this batch
The priority of this batch
Time at which the group started (UTC).
Time at which the group finished (UTC)
Time at which the group became ready (UTC).
Steps within this run
State of an aggregate
Aggregate is not currently being built (no required nodes are present)
Steps are still running
All steps are complete
Outcome of an aggregate
Aggregate is not currently being built
A step dependency failed
A dependency finished with warnings
Successful
State of a label within a job
Name to show for this label on the dashboard
Category to show this label in on the dashboard
Name to show for this label in UGS
Project to display this label for in UGS
State of the label
Outcome of the label
Steps to include in the status of this label
Information about the default label (i.e. with an inlined list of nodes)
List of nodes covered by default label
Information about the timing info for a particular target
Wait time on the critical path
Sync time on the critical path
Duration to this point
Average wait time by the time the job reaches this point
Average sync time to this point
Average duration to this point
Information about the timing info for a particular target
Name of this node
Average wait time for this step
Average init time for this step
Average duration for this step
Information about the timing info for a label
Name of the label
Category for the label
Name of the label
Category for the label
Name of the label
Category for the label
Describes the history of a step
The job id
The batch containing the step
The step identifier
The change number being built
The commit being built
The step log id
The pool id
The agent id
Outcome of the step, once complete.
The issues which affected this step
Time at which the step started.
Time at which the step finished.
Identifier for a job step batch
Id to construct from
Identifier for a job step batch
Id to construct from
Id to construct from
Constructor
Creates a new
Converter to and from instances.
Systemic error codes for a job step failing
No systemic error
Step did not complete in the required amount of time
Step is in a paused state, so it was skipped
Step did not complete because the batch exited
Identifier for a jobstep
Id to construct from
Identifier for a jobstep
Id to construct from
Id to construct from
Constructor
Creates a new
Converter to and from instances.
Outcome for a jobstep
Outcome is not known
Step failed
Step completed with warnings
Step succeeded
Unique id struct for JobStepRef objects. Includes a job id, batch id, and step id to uniquely identify the step.
The job id
The batch id within the job
The step id
Constructor
The job id
The batch id within the job
The step id
Parse a job step id from a string
Text to parse
The parsed id
Formats this id as a string
Formatted id
State of a job step
Unspecified
Waiting for dependencies of this step to complete (or paused)
Ready to run, but has not been scheduled yet
Dependencies of this step failed, so it cannot be executed
There is an active instance of this step running
This step has been run
This step started to execute, but was aborted
Priority of a job or step
Not specified
Lowest priority
Below normal priority
Normal priority
Above normal priority
High priority
Highest priority
Schedule for a template
Whether the schedule should be enabled
Maximum number of builds that can be active at once
Maximum number of changes the schedule can fall behind head revision. If greater than zero, builds will be triggered for every submitted changelist until the backlog is this size.
Whether the build requires a change to be submitted
Gate allowing the schedule to trigger
Commit tags for this schedule
Roles to impersonate for this schedule
Last changelist number that this was triggered for
Gets the last trigger time, in UTC
List of jobs that are currently active
Patterns for starting this scheduled job
Files that should cause the job to trigger
Parameters for the template
Claim granted to a schedule
The claim type
The claim value
Required gate for starting a schedule
The template containing the dependency
Target to wait for
Pattern for executing a schedule
Days of the week to run this schedule on. If null, the schedule will run every day.
Time during the day for the first schedule to trigger. Measured in minutes from midnight.
Time during the day for the last schedule to trigger. Measured in minutes from midnight.
Interval between each schedule triggering
Extension methods for schedules
Gets the next trigger time for a schedule
Get the next time that the schedule will trigger
Schedule to query
Last time at which the schedule triggered
Timezone to evaluate the trigger
Next time at which the schedule will trigger
Calculates the trigger index based on the given time in minutes
Pattern to query
Time for the last trigger
The timezone for running the schedule
Index of the trigger
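The pattern semantics above (first and last trigger times in minutes from midnight, an interval between triggers, and an optional set of days) can be sketched as follows. `next_trigger` and its parameters are illustrative names, not the Horde API:

```python
from datetime import datetime, timedelta

def next_trigger(last: datetime, min_time: int, max_time: int,
                 interval: int, days=None) -> datetime:
    """Find the first trigger time strictly after `last`.

    Times are minutes from midnight; `days` is a set of weekday numbers
    (Mon=0) or None for every day. A simplified sketch of the schedule
    pattern described above; assumes interval > 0 and days, if given,
    is non-empty.
    """
    day = last.replace(hour=0, minute=0, second=0, microsecond=0)
    while True:
        if days is None or day.weekday() in days:
            t = min_time
            while t <= max_time:
                candidate = day + timedelta(minutes=t)
                if candidate > last:
                    return candidate
                t += interval
        day += timedelta(days=1)
```

For example, with triggers at 9:00, 13:00 and 17:00 (min 540, max 1020, interval 240), the next trigger after 13:00 is 17:00, and after 17:30 the schedule rolls over to 9:00 the next eligible day.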
Time of day value for a schedule
Time of day value for a schedule
Parse a string as a time of day
Response describing a schedule
Whether the schedule is currently enabled
Maximum number of scheduled jobs at once
Maximum number of changes the schedule can fall behind head revision. If greater than zero, builds will be triggered for every submitted changelist until the backlog is this size.
Whether the build requires a change to be submitted
Gate for this schedule to trigger
Which commits to run this job for
Parameters for the template
New patterns for the schedule
Last changelist number that this was triggered for
Last changelist number that this was triggered for
Last time that the schedule was triggered
Next trigger times for schedule
List of active jobs
Default constructor
Constructor
Schedule to construct from
The scheduler time zone
Gate allowing a schedule to trigger.
The template containing the dependency
Target to wait for
Default constructor
Constructor
Parameters to create a new schedule
Days of the week to run this schedule on. If null, the schedule will run every day.
Time during the day for the first schedule to trigger. Measured in minutes from midnight.
Time during the day for the last schedule to trigger. Measured in minutes from midnight.
Interval between each schedule triggering
Constructor
Constructor
Response describing when a schedule is expected to trigger
Next trigger times
Constructor
List of trigger times
Time of day value for a schedule
Time of day value for a schedule
Parse a string as a time of day
Identifier for subresources. Assigning unique ids to subresources prevents against race conditions using indices when subresources are added and removed.
Subresource identifiers are stored as 16-bit integers formatted as 4-digit hex codes, in order to keep URLs short. Calling Next() will generate a new
identifier with more entropy than simply incrementing the value, but with an identical period before repeating, making URL fragments more distinctive.
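The "more entropy than incrementing, identical period" property can be sketched with a full-period 16-bit linear congruential generator. The constants below are hypothetical, chosen to satisfy the Hull-Dobell full-period conditions; they are not the ones Horde actually uses:

```python
M = 1 << 16        # 16-bit identifier space
A, C = 4093, 12345 # hypothetical: A % 4 == 1 and odd C give full period

def next_subresource_id(value: int) -> int:
    # Advance the id; unlike value + 1, successive ids look unrelated,
    # yet the sequence still visits all 2**16 values before repeating.
    return (A * value + C) % M

def format_subresource_id(value: int) -> str:
    # Subresource ids render as 4-digit hex codes to keep URLs short.
    return f"{value:04x}"
```

Starting from any value, the generator produces all 65536 ids exactly once before returning to the start, matching the period of a simple increment while scattering consecutive ids.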
The unique identifier value
Constructor
New identifier for this subresource
Creates a new random subresource id. We use random numbers for this to increase distinctiveness.
New subresource id
Updates the current value, and returns a copy of the previous value.
New subresource identifier
Parse a subresource id from a string
Text to parse
New subresource id
Attempt to parse a subresource id from a string
Text to parse
Receives the parsed subresource id on success
True if the id was parsed correctly
Converts this identifier to a string
String representation of this id
Equality operator for identifiers
First identifier to compare
Second identifier to compare
True if the identifiers are equal
Inequality operator for identifiers
First identifier to compare
Second identifier to compare
True if the identifiers are not equal
Extension methods for manipulating subresource ids
Parse a string as a subresource identifier
Text to parse
The new subresource identifier
Type converter from strings to SubResourceId objects
Base class for converting to and from types containing a . Useful pattern for reducing boilerplate with strongly typed records.
Converts a type to a
Constructs a type from a
Attribute declaring a for a particular type
The converter type
Constructor
Class which serializes types with a to Json
Class which serializes types with a to Json
Creates constructors for types with a to Json
Document describing a job template. These objects are considered immutable once created and uniquely referenced by hash, in order to de-duplicate across all job runs.
Hash of this template
Name of the template.
Description for the template
Priority of this job
Whether to allow preflights for this job type
Whether to always update issues for jobs using this template
Whether to promote issues by default for jobs using this template
Agent type to use for parsing the job state
Path to a file within the stream to submit to generate a new changelist for jobs
Description for new changelists
Optional predefined user-defined properties for this job
Parameters for this template
Base class for parameters used to configure templates via the new build dialog
Gets the arguments for a job given a set of parameters
Map of parameter id to value
Whether this is a scheduled build
Receives command line arguments for the job
Gets the default arguments for this parameter and its children
List of default parameters
Whether the arguments are being queried for a scheduled build
Allows the user to toggle an option on or off
Identifier for this parameter
Label to display next to this parameter.
Argument to add if this parameter is enabled
Arguments to add if this parameter is enabled
Argument to add if this parameter is disabled
Arguments to add if this parameter is disabled
Whether this option should be enabled by default
Whether this option should be enabled by default
Tool tip text to display
Free-form text entry parameter
Identifier for this parameter
Label to display next to this parameter. Should default to the parameter name.
Argument to add (will have the value of this field appended)
Default value for this argument
Override for this argument in scheduled builds.
Hint text to display when the field is empty
Regex used to validate values entered into this text field.
Message displayed to explain valid values if validation fails.
Tool tip text to display
Style of list parameter
Regular drop-down list. One item is always selected.
Drop-down list with checkboxes
Tag picker from list of options
Allows the user to select a value from a constrained list of choices
Label to display next to this parameter.
Style of picker parameter to use
List of values to display in the list
Tool tip text to display
Possible option for a list parameter
Identifier for this parameter
Group to display this entry in
Text to display for this option.
Argument to add if this parameter is enabled.
Arguments to add if this parameter is enabled.
Argument to add if this parameter is disabled.
Arguments to add if this parameter is disabled.
Whether this item is selected by default
Whether this item is selected by default
Extension methods for templates
Gets the full argument list for a template
Gets the arguments for default options in this template. Does not include the standard template arguments.
List of default arguments
Query selecting the base changelist to use
Name of this query, for display on the dashboard.
Condition to evaluate before deciding to use this query. May query tags in a preflight.
The template id to query
The target to query
Whether to match a job that produced warnings
Finds the last commit with this tag
Interface for a collection of template documents
Gets a template by ID
Unique id of the template
The template document
Identifier for a template parameter
Id to construct from
Identifier for a template parameter
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Identifier for a job template
Id to construct from
Identifier for a job template
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Response describing a template
Name of the template
Description for the template
Default priority for this job
Whether to allow preflights of this template
Whether to always update issues on jobs using this template
The initial agent type to parse the BuildGraph script on
Path to a file within the stream to submit to generate a new changelist for jobs
Parameters for the job.
List of parameters for this template
Parameterless constructor for serialization
Constructor
The template to construct from
Response describing a template
Unique id of the template
Parameterless constructor for serialization
Constructor
The template to construct from
Base class for template parameters
Allows the user to toggle an option on or off
Default constructor
Constructor
Free-form text entry parameter
Default constructor
Constructor
Allows the user to select a value from a constrained list of choices
Default constructor
Constructor
Possible option for a list parameter
Default constructor
Constructor
Test outcome
The test was successful
The test failed
The test was skipped
The test had an unspecified result
Response object describing test data to store
The job which produced the data
The step that ran
Key used to identify the particular data
The data stored for this test
Response object describing the created document
The id for the new document
Constructor
Id of the new document
Response object describing test results
Unique id of the test data
Stream that generated the test data
The template reference id
The job which produced the data
The step that ran
The changelist number that contained the data
The changelist number that contained the data
Key used to identify the particular data
The data stored for this test
A test environment running in a stream
Unique id for the environment metadata
The platforms in the environment
The build configurations being tested
The build targets being tested
The test project name
The rendering hardware interface being used with the test
The variation of the test metadata, for example address sanitizing
A test that runs in a stream
The id of the test
The name of the test
The name of the test
The name of the test suite
The meta data the test runs on
Get tests request
Test ids to get
A test suite that runs in a stream, containing subtests
The id of the suite
The name of the test suite
The meta data the test suite runs on
Response object describing test results
The stream id
Individual tests which run in the stream
Test suites that run in the stream
Test suites that run in the stream
Suite test data
The test id
The outcome of the suite test
How long the suite test ran
Test UID for looking up in test details
The number of test warnings generated
The number of test errors generated
Test details
The corresponding test ref
The test documents for this ref
Suite test data
Data ref
The test ref id
The associated stream
The associated job id
The associated step id
How long the test ran
The build changelist upon which the test ran; may not correspond to the job changelist
The build changelist upon which the test ran; may not correspond to the job changelist
The platform the test ran on
The test id in stream
The outcome of the test
The id of the stream test suite
Suite tests skipped
Suite test warnings
Suite test errors
Suite test successes
Timing information for a particular job
Gets timing information for a particular step
Name of the node being executed
Logger for diagnostic messages
Receives the timing information for the given step
True if the timing was found
Information about the timing for an individual step
Wait time before executing the group containing this node
Time taken for the group containing this node to initialize
Time spent executing this node
Interface for a log device
Flushes the logger with the server and stops the background work
Extension methods for
Creates a logger which uploads data to the server
Logger for the server
Local log output device
New logger instance
Read-only buffer for log text, with indexed line offsets.
Type of this blob when serialized
Provides access to the lines for this chunk through a list interface
Empty log chunk
The raw text data. Contains a complete set of lines followed by newline characters.
Span for the raw text data.
Accessor for the lines in this chunk
Offsets of lines within the data object, including a sentinel for the end of the data (LineCount + 1 entries).
Length of this chunk
Number of lines in the block (excluding the sentinel).
Default constructor
Constructor
Data to construct from
Constructor
Accessor for an individual line
Index of the line to retrieve
Line at the given index
Accessor for an individual line, including the trailing newline character
Index of the line to retrieve
Line at the given index
Find the line index for a particular offset
Offset within the text
The line index
Creates a new list of line offsets for the given text
Updates the length of this chunk, computing all the newline offsets
Text to search for line endings
Start offset within the text buffer
Offsets of each line within the text
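The line-offset layout described above (one offset per line start, plus a sentinel for the end of the data, giving LineCount + 1 entries) makes offset-to-line lookup a binary search. A minimal sketch, with illustrative function names:

```python
from bisect import bisect_right

def create_line_offsets(data: bytes) -> list[int]:
    # Offsets of each line start within the data, including a sentinel
    # for the end of the data (LineCount + 1 entries).
    offsets = [0]
    for i, b in enumerate(data):
        if b == 0x0A:  # each complete line is terminated by a newline
            offsets.append(i + 1)
    return offsets

def line_index_for_offset(offsets: list[int], offset: int) -> int:
    # The line containing `offset` is the last line whose start <= offset.
    return bisect_right(offsets, offset) - 1
```

With `b"foo\nbar\nbaz\n"` the offsets are `[0, 4, 8, 12]`, the line count is `len(offsets) - 1`, and any byte offset maps to its line in O(log n).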
Converter from log chunks to blobs
Reference to a chunk of text, with information about its placement in the larger log file
First line within the file
Number of lines in this block
Offset within the entire log file
Length of this chunk
Handle to the target chunk
Constructor
Index of the first line within this block
Number of lines in the chunk
Offset within the log file
Length of the chunk
Referenced log text
Deserializing constructor
Builder for objects.
Accessor for Data
Current used length of the buffer
Offsets of the start of each line within the data
Current length of the buffer
Number of lines in this buffer
Capacity of the buffer
Constructor
Constructor
Data to initialize this chunk with. Ownership of this array is transferred to the chunk, and its length determines the chunk size.
Number of valid bytes within the initial data array
Constructor
Clear the contents of the buffer
Gets a line at the given index
Index of the line
Text for the line
Create a new chunk data object with the given data appended. The internal buffers are reused, with the assumption that
there is no contention over writing to the same location in the chunk.
The data to append
New chunk data object
Appends JSON text from another buffer as plain text in this one
Appends JSON text from another buffer as plain text in this one
Ensure there is a certain amount of space in the output buffer
Required space
Determines if the given line is empty
The input data
True if the given text is empty
Converts a JSON log line to plain text
The JSON data
Output buffer for the converted line
Offset within the buffer to write the converted data
Unescape a json utf8 string
Source span of bytes
Target span of bytes
Length of the converted data
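A minimal sketch of JSON-to-plain-text conversion, assuming each structured line is a JSON object carrying its rendered text in a `message` field (an assumption; the actual field names Horde uses may differ):

```python
import json

def json_log_line_to_plain_text(line: bytes) -> bytes:
    """Convert one structured JSON log line to plain text.

    Assumes a 'message' field holds the rendered text (hypothetical);
    non-JSON lines are passed through unchanged.
    """
    try:
        obj = json.loads(line)
        message = obj.get("message", "")
    except json.JSONDecodeError:
        # Not valid JSON: fall back to the raw line
        return line
    # json.loads has already unescaped \uXXXX sequences and quotes
    return message.encode("utf-8")
```

Note that the JSON parser performs the string unescaping step itself; a streaming implementation like the one above would instead unescape the utf8 bytes in place to avoid allocating intermediate objects.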
Shrinks the data allocated by this chunk to the minimum required
Create an array of lines from the text
Array of lines
Create a object from the current state
Builds a sequence of log chunks
Desired size for each chunk
The complete chunks. Note that this does not include data which has not yet been flushed.
Total length of the sequence
Number of lines in this builder
Constructor
Desired size for each chunk. Each chunk will be limited to this size.
Clear the current contents of the buffer
Remove a number of chunks from the start of the builder
Number of chunks to remove
Flushes the current contents of the builder
Enumerate lines starting at the given index
Index to start from
Sequence of lines
Flushes the current chunk if necessary to provide the requested space
Space required in
Extension methods for ILogText
Gets the chunk index containing the given offset.
The chunks to search
The offset to search for
The chunk index containing the given offset
Gets the starting chunk index for the given line
The chunks to search
Index of the line to query
Index of the chunk to fetch
Converts a log text instance to plain text
The text to convert
Logger for conversion warnings
The plain text instance
Severity of a log event
Severity is not specified
Information severity
Warning severity
Error severity
Information about an uploaded event
Unique id of the log containing this event
Severity of this event
Index of the first line for this event
Number of lines in the event
The issue id associated with this event
The structured message data for this event
Identifier for a log
Id to construct from
Identifier for a log
Id to construct from
Id to construct from
Constant value for empty user id
Converter to and from instances.
Index for a log file.
The index consists of a sequence of compressed, plain text blocks, and a
set of (ngram, block index) pairs encoded as 64-bit integers.
Each ngram is a 1-4 byte sequence of utf8 bytes, padded out to 32 bits.
When performing a text search, the search term is split into a set of ngrams, and the set is queried for blocks containing
them all. Matching blocks are decompressed and scanned for matches using a simplified Knuth-Morris-Pratt search.
Since the alignment of ngrams in the index may not match the alignment of ngrams in the search term, we offset the search term by
1-4 bytes and include the union of blocks matching at any offset.
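The ngram scheme can be sketched as follows: blocks are indexed by aligned 4-byte ngrams, the search term is tried at each of the four alignments, and candidate blocks are scanned to confirm real matches. This is an illustrative reimplementation (a plain substring test stands in for the Knuth-Morris-Pratt scan):

```python
NGRAM = 4  # bytes per ngram, packed into 32 bits

def ngrams(data: bytes):
    # Yield 32-bit values for every aligned full 4-byte window.
    for i in range(0, len(data) - NGRAM + 1, NGRAM):
        yield int.from_bytes(data[i:i + NGRAM], "big")

def build_index(blocks):
    # Map each aligned ngram to the set of block indices containing it.
    index = {}
    for block_idx, block in enumerate(blocks):
        for gram in ngrams(block):
            index.setdefault(gram, set()).add(block_idx)
    return index

def candidate_blocks(index, term: bytes, num_blocks: int):
    # Alignment in the index may not match alignment in the term, so
    # try offsets 0..3 and union the blocks matching at any offset.
    # Terms with no full window conservatively match all blocks.
    result = set()
    for offset in range(NGRAM):
        matching = set(range(num_blocks))
        for gram in ngrams(term[offset:]):
            matching &= index.get(gram, set())
        result |= matching
    return result

def search(blocks, index, term: bytes):
    # Candidates may be false positives, so scan each block to confirm.
    return [i for i in sorted(candidate_blocks(index, term, len(blocks)))
            if term in blocks[i]]
```

The index never produces false negatives: for a term occurring at byte position p in a block, the alignment offset (-p) mod 4 reproduces the block's own ngram boundaries, so the true block always survives that offset's intersection.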
Index for tokens into the block list
Number of bits in the index devoted to the block index
List of text blocks
Empty index data
Public accessor for the plain text chunks
Number of lines covered by the index
Length of the text data
Constructor
Index into the text chunks
Number of bits devoted to the chunks index
Plain text chunks for this log file
Appends a set of text blocks to this index
Writer for output nodes
Text blocks to append
New log index with the given blocks appended
Search for the given text in the index
First line index to search from
Text to search for
Receives stats for the search
Cancellation token for the operation
List of line numbers for the text
Search for the given text in the index
Text to search for
The first
List of line numbers for the text
Gets predicates for matching a token that starts on a regular token boundary
The token text
Whether to allow a partial match of the token
Set of aligned tokens that are required
List of predicates for the search
Generates a predicate for matching a token which may or may not start on a regular token boundary
The token text
Whether to allow a partial match of the token
List of predicates for the search
Tests whether a chunk contains a particular token
Index of the chunk to search
The token to test
Mask of which bits in the token are valid
True if the given block contains a token
Tests whether a chunk contains a particular ngram
The token to test
Offset of the window into the token to test
Whether to allow a partial match of the token
True if the given block contains a token
Type for serializing index nodes
Version number for serialized data
The type of data stored in this log file
Plain text data
Structured json objects, output as one object per line (without trailing commas)
Creates a new log file
Type of the log file
Response from creating a log file
Identifier for the created log file
Response describing a log file
Unique id of the log file
Unique id of the job for this log file
The lease allowed to write to this log
The session allowed to write to this log
Type of events stored in this log
Number of lines in the file
Response describing a log file
List of line numbers containing the search text
Stats for the search
Response when querying for specific lines from a log file
Start index of the lines returned
Number of lines returned
Last index of the returned messages
Type of response, Json or Text
List of lines received
Response object for individual lines
Timestamp for log line
Level of Message (Information, Warning, Error)
Message itself
Format string for the message
User-defined properties for this jobstep.
Format for the log file
Text data
Json data
Represents an entire log
Format for this log file
Total number of lines
Length of the log file
Text blocks for this chunk
Index for this log
Whether this log is complete
Deserializing constructor
Serializer for types
Type of blob when serialized to storage
Assists building log files through trees of nodes. This
class is designed to be thread safe, and presents a consistent view to readers and writers.
Default maximum size for a log text block
Default maximum size for an index text block
Number of lines written to the log
Constructor
Format for data in the log file
Constructor
Format of data in the log file
maximum size for a regular text block
Maximum size for an index text block
Logger for conversion errors
Read data from the unflushed log tail
The first line to read, from the end of the flushed data
Append JSON data to the end of the log
Log data to append
Flushes the written data to the log
Writer for the output nodes
Whether the log is complete
Cancellation token for the operation
Extension methods
Reads lines from a log
Log to read from
Cancellation token
Sequence of line buffers
Reads lines from a log
Log to read from
Zero-based index of the first line to read from
Cancellation token
Sequence of line buffers
Functionality for decomposing log text into ngrams.
Maximum number of bytes in each ngram
Number of bits in each ngram
Lookup from input byte to token type
Lookup from input byte to token char
Gets a single token
The text to parse
The token value
Decompose a span of text into tokens
Text to scan
Receives a set of tokens
Decompose a string to a set of ngrams
Text to scan
Gets the length of the first token in the given span
The text to search
Start position for the search
Length of the first token
Gets the length of the first token in the given span
The text to search
Offset of the window to read from the token
Length of the first token
Gets the length of the first token in the given span
The text to search
Offset of the window to read from the token
Whether to allow only matching the start of the string
Length of the first token
Build the lookup table for token types
Array whose elements map from an input byte to token type
Build the lookup table for token types
Array whose elements map from an input byte to token type
A sparse, space-efficient set of 64-bit values. Implemented as a trie backed by a flat lookup table.
Each 64-bit value in the set is decomposed into 4-bit fragments, and each node in the trie contains a 2^4=16-bit mask
indicating which child nodes exist. The array of nodes is stored in a flat buffer in a predictable order, with
the children of a particular node stored contiguously, breadth first.
Doing so allows constructing a lookup table for the first child of each parent node with a single pass of
the buffer, allowing efficient traversal of the tree to satisfy queries.
In practice, only the top 32-bits of values stored in the trie are used for encoding ngram information. The
bottom 32 bits are used to index a block number, allowing querying the existence of ngrams and their
rough location.
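A sketch of the trie layout described above: each level holds 16-bit child masks in breadth-first order, and a child's node index is recovered by counting set bits, the popcount pass that the flat child-offset table precomputes. This is an illustrative reimplementation, not the Horde code:

```python
FRAGMENT_BITS = 4
HEIGHT = 64 // FRAGMENT_BITS  # 16 levels of 4-bit fragments

def build_trie(values: set[int]) -> list[list[int]]:
    """Build per-level node masks, breadth-first.

    Each node is a 16-bit mask of which 4-bit child fragments exist;
    children of a node occupy the next level contiguously, in the order
    the set bits of the parents are visited.
    """
    levels = []
    prefixes = [0]  # breadth-first value prefixes at the current level
    for level in range(HEIGHT):
        shift = 64 - FRAGMENT_BITS * (level + 1)
        masks, next_prefixes = [], []
        for prefix in prefixes:
            mask = 0
            for v in sorted(values):
                if v >> (shift + FRAGMENT_BITS) == prefix:
                    mask |= 1 << ((v >> shift) & 0xF)
            masks.append(mask)
            for bit in range(16):
                if mask & (1 << bit):
                    next_prefixes.append((prefix << FRAGMENT_BITS) | bit)
        levels.append(masks)
        prefixes = next_prefixes
    return levels

def contains(levels: list[list[int]], value: int) -> bool:
    # Walk one 4-bit fragment per level; the child's index is the total
    # popcount of earlier masks plus the bits below this fragment.
    node = 0
    for level in range(HEIGHT):
        shift = 64 - FRAGMENT_BITS * (level + 1)
        frag = (value >> shift) & 0xF
        mask = levels[level][node]
        if not mask & (1 << frag):
            return False
        node = sum(bin(m).count("1") for m in levels[level][:node]) \
             + bin(mask & ((1 << frag) - 1)).count("1")
    return True
```

A real implementation would precompute the per-node popcount prefix sums once (the child-offset lookup table) instead of recounting them on every query, turning each membership test into 16 mask-and-index steps.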
Stack item for traversing the tree
The current node index
Value in the current node (0-15)
Delegate for filtering values during a tree traversal
The current value
Mask for which bits in the value are valid
True if values matching the given mask should be enumerated
Height of the tree
Array of bitmasks for each node in the tree
Array of bitmasks for each node in the tree
Array of child offsets for each node. Excludes the last layer of the tree.
Empty index definition
Constructor
Node data
Tests whether the given value is in the trie
The value to check for
True if the value is in the trie
Enumerate all values matching a given filter
Predicate for which values to include
Values satisfying the given predicate
Enumerates all values in the trie between the given ranges
Minimum value to enumerate
Maximum value to enumerate
Sequence of values
Creates a lookup for child node offsets from raw node data
Array of masks for each node
Array of offsets
Count the number of set bits in the given value
Value to test
Number of set bits
Read a trie from the given buffer
Reader to read from
New trie
Write this trie to the given buffer
Writer to output to
Gets the serialized size of this trie
Extension methods for serializing tries
Read a trie from the given buffer
Reader to read from
New trie
Write this trie to the given buffer
Writer to output to
Trie to write
Structure used for building compact instances
Node within the trie
The root node
Default constructor
Adds a value to the trie
Value to add
Searches for the given item in the trie
Value to search for
Creates a from this data
Stats for a search
Number of blocks that were scanned
Number of bytes that had to be scanned for results
Number of blocks that were skipped
Number of blocks that had to be decompressed
Number of blocks that were searched but did not contain the search term
Stores cached information about a utf8 search term
The search text
The utf-8 bytes to search for
Normalized (lowercase) utf-8 bytes to search for
Skip table for comparisons
Constructor
The text to search for
Find all occurrences of the text in the given buffer
The buffer to search
The text to search for
Sequence of offsets within the buffer
Perform a case-insensitive search for the next occurrence of the search term in a given buffer
The buffer to search
Starting offset for the search
The text to search for
Offset of the next occurrence, or -1
Compare the search term against the given buffer
The buffer to search
Starting offset for the search
The text to search for
True if the text matches, false otherwise
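The skip table and case-insensitive comparison can be sketched as a Boyer-Moore-Horspool style search. The exact algorithm Horde uses is an assumption here, and the function names are illustrative:

```python
def build_skip_table(term: bytes) -> dict[int, int]:
    # Horspool-style skip table: distance from each byte's last
    # occurrence in the term (excluding the final byte) to the end.
    term = term.lower()
    return {b: len(term) - 1 - i for i, b in enumerate(term[:-1])}

def find_next(buffer: bytes, offset: int, term: bytes, skip) -> int:
    # Case-insensitive scan. When the window does not match, advance by
    # the skip distance for the window's last byte (or the full term
    # length if that byte never appears in the term). Returns -1 if the
    # term is not found.
    term = term.lower()
    n, m = len(buffer), len(term)
    i = offset
    while i + m <= n:
        window = buffer[i:i + m].lower()
        if window == term:
            return i
        i += skip.get(window[-1], m)
    return -1
```

Normalizing both the term and each window to lowercase up front mirrors the cached normalized bytes described above; the case-sensitive variant is identical minus the `lower()` calls.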
Stores cached information about a utf8 search term
Find all occurrences of the text in the given buffer
The buffer to search
The text to search for
Sequence of offsets within the buffer
Perform a case-sensitive search for the next occurrence of the search term in a given buffer
The buffer to search
Starting offset for the search
The text to search for
Offset of the next occurrence, or -1
Compare the search term against the given buffer
The buffer to search
Starting offset for the search
The text to search for
True if the text matches, false otherwise
Class to handle uploading log data to the server in the background
Constructor
Horde instance to write to
The log id to write to
Minimum level for output
Logger for systemic messages
Stops the log writer's background task
Async task
Dispose of this object. Call StopAsync() to stop asynchronously.
Upload the log data to the server in the background
Async task
Utility class to split log events into separate lines and buffer them for writing to the server
Current packet length
Maximum length of an individual line
Maximum size of a packet
Constructor
Maximum length for an individual line
Maximum length for a packet
Creates a packet from the current data
Packet data and number of lines written
Writes an event
Event to write
Writes an event with a format string, splitting it into multiple lines if necessary
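The line-splitting and packet-buffering behaviour can be sketched as below. The class name and size limits are illustrative, not the Horde API:

```python
class LogPacketBuilder:
    """Buffer log lines into size-limited packets.

    A simplified sketch of the behaviour described above: events are
    split into lines, over-long lines are truncated, and a packet is
    flushed when the next line would exceed the maximum packet size.
    """

    def __init__(self, max_line_length=1024, max_packet_size=8192):
        self._max_line = max_line_length
        self._max_packet = max_packet_size
        self._lines: list[bytes] = []
        self._length = 0
        self.packets: list[bytes] = []

    def write_event(self, message: str) -> None:
        # Split the event into lines, truncating any that are too long.
        for line in message.splitlines() or [""]:
            data = line.encode("utf-8")[: self._max_line]
            if self._length + len(data) + 1 > self._max_packet:
                self.flush()  # current packet is full; start a new one
            self._lines.append(data)
            self._length += len(data) + 1  # +1 for the newline

    def flush(self) -> None:
        # Emit the buffered lines as one newline-terminated packet.
        if self._lines:
            self.packets.append(b"\n".join(self._lines) + b"\n")
            self._lines, self._length = [], 0
```

In a real sink the flushed packets would be uploaded to the server by the background task rather than accumulated in a list.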
Unique id for a notification trigger id
Identifier for the notification trigger
Unique id for a notification trigger id
Identifier for the notification trigger
Identifier for the notification trigger
Converter class to and from ObjectId values
Constructor
Handle to a Horde project
Unique id of the project
Name of the project
Order to display this project on the dashboard
List of streams that are in this project
Describes a stream within a project
The stream id
The stream name
Collection of projects
Retrieve information about a specific project
Id of the project to get information about
Cancellation token for the operation
Information about the requested project
Query all the projects
Cancellation token for the operation
Information about all the projects
Identifier for a pool
Id to construct from
Identifier for a pool
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Response describing a project
Unique id of the project
Name of the project
Order to display this project on the dashboard
List of streams that are in this project
List of stream categories to display
Constructor
Unique id of the project
Name of the project
Order to show this project on the dashboard
Information about a stream within a project
The stream id
The stream name
Constructor
Information about a category to display for a stream
Heading for this column
Index of the row to display this category on
Whether to show this category on the nav menu
Patterns for stream names to include
Patterns for stream names to exclude
Streams to include in this category
Constructor
Unique identifier for a replicator across all streams
Unique identifier for a replicator across all streams
Parse a replicator id
Parse a replicator id
Identifier for a replicator
Id to construct from
Identifier for a replicator
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Information about a replicator
Identifier for this replicator
Identifier for the stream
Identifier for this replicator within the stream
Status description
Whether to pause replication
Whether to perform a clean snapshot
Resets the replication
Pauses replication after the current change
The last change that was replicated
Time at which the last change was replicated
The current change being replicated
Time at which the current change was replicated
Size of data currently being replicated
Amount of data copied for the current change
Last error with replication, if there is one.
Information about a replicator
Whether to pause replication immediately
Whether to perform a clean snapshot for the next replicated change
Discards all replicated changes and starts replication from scratch
Pauses replication after one change has been replicated
Change that should be replicated. Setting this to a value ahead of the last replicated change will cause changes in between to be skipped.
Constructor
Information about a secret
Identifier for the secret
The secret values
Collection of secrets
Resolve a secret to concrete values
Identifier for the secret
Cancellation token for the operation
Identifier for a secret
Id to construct from
Identifier for a secret
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Response listing all the secrets available to the current user
List of secret ids
Constructor
Gets data for a particular secret
Id of the secret
Key value pairs for the secret
Constructor
Parameters required to create a notice
Start time to display this message
Finish time to display this message
Message to display
Constructor
Parameters required to update a notice
The id of the notice to update
Start time to display this message
Finish time to display this message
Message to display
Constructor
Notice information
The id of the notice to update
Start time to display this message
Finish time to display this message
Whether this notice is for scheduled downtime
Whether the notice is currently active
Message to display
User id who created the notice, otherwise null if a system message
Server Info
Current API version number of the server
Server version info
The current agent version string
List of plugins
Gets connection information to the server
Public IP address of the remote machine
Public port of the connecting machine
Gets ports configured for this server
Port for HTTP communication
Port number for HTTPS communication
Port number for unencrypted HTTP communication
Authentication method used for logging users in
No authentication enabled. *Only* for demo and testing purposes.
OpenID Connect authentication, tailored for Okta
Generic OpenID Connect authentication, recommended for most
Authenticate using username and password credentials stored in Horde
OpenID Connect (OIDC) is the recommended authentication method.
However, if you have a small installation (fewer than ~10 users) or no OIDC provider available, this is an option.
Describes the auth config for this server
Issuer for tokens from the auth provider
Optional profile name used by OidcToken
Issuer for tokens from the auth provider
Client id for the OIDC authority
Optional redirect url provided to OIDC login for external tools (typically to a local server)
Request to validate server configuration with the given files replacing their checked-in counterparts.
Perforce cluster to retrieve from
Change to test
Response from validating config files
Whether the files were validated successfully
Output message from validation
Detailed response
Status for a subsystem within Horde
Name of the subsystem
List of updates
Type of status result for a single update
Indicates that the health check determined that the subsystem was unhealthy
Indicates that the health check determined that the subsystem was in a degraded state
Indicates that the health check determined that the subsystem was healthy
A single status update
Result of status update
Optional message describing the result
Time this update was created
Response from server status controller
List of subsystem statuses
Information about a server plugin
Name of the plugin
Optional description of the plugin
Whether the plugin is loaded
The version of the plugin assembly
Information about a server plugin
Name of the plugin
Optional description of the plugin
Whether the plugin is loaded
The version of the plugin assembly
Name of the plugin
Optional description of the plugin
Whether the plugin is loaded
The version of the plugin assembly
Identifier for a user account
Id to construct from
Identifier for a user account
Id to construct from
Id to construct from
Converter to and from instances.
Creates a new user account
Name of the account
Description for the account
Claims for the user
Whether the account is enabled
Creates a new user account
Name of the account
Description for the account
Claims for the user
Whether the account is enabled
Name of the account
Description for the account
Claims for the user
Whether the account is enabled
Response from the request to create a new user account
The created account id
Secret used to auth with this account
Response from the request to create a new user account
The created account id
Secret used to auth with this account
The created account id
Secret used to auth with this account
Update request for a user account
Name of the account
Description for the account
Claims for the user
Request that the token get reset
Whether the account is enabled
Update request for a user account
Name of the account
Description for the account
Claims for the user
Request that the token get reset
Whether the account is enabled
Name of the account
Description for the account
Claims for the user
Request that the token get reset
Whether the account is enabled
Response from updating a user account
Response from updating a user account
Creates a new user account
Id of the account
Claims for the user
Description for the account
Whether the account is enabled
Creates a new user account
Id of the account
Claims for the user
Description for the account
Whether the account is enabled
Id of the account
Claims for the user
Description for the account
Whether the account is enabled
Exception thrown when stream validation fails
Information about a stream
Name of the stream.
Project that this stream belongs to
Name of the stream
Path to the config file for this stream
Current revision of the config file
Order for this stream on the dashboard
Notification channel for all jobs in this stream
Notification channel filter for this template. Can be Success, Failure, or Warnings.
Channel to post issue triage notifications
Tabs for this stream on the dashboard
Agent types configured for this stream
Workspace types configured for this stream
List of templates available for this stream
Workflows configured for this stream
Default settings for preflights against this stream
Stream is paused for builds until specified time
Comment/reason for why the stream was paused
Commits for this stream
Get the latest stream state
Cancellation token for this operation
Updated stream, or null if it no longer exists
Updates user-facing properties for an existing stream
The new datetime for pausing builds
The reason for pausing
Cancellation token for the operation
The updated stream if successful, null otherwise
Attempts to update the last trigger time for a schedule
The template ref id
New last trigger time for the schedule
New last trigger commit for the schedule
New list of active jobs
Cancellation token for the operation
The updated stream if successful, null otherwise
Attempts to update a stream template ref
The template ref to update
The stream states to update, pass an empty list to clear all step states, otherwise will be a partial update based on included step updates
Cancellation token for the operation
Style for rendering a tab
Regular job list
Omit job names, show condensed view
Information about a page to display in the dashboard for a stream
Title of this page
Type of this tab
Presentation style for this page
Whether to show job names on this page
Whether to show all user preflights
Names of jobs to include on this page. If there is only one name specified, the name column does not need to be displayed.
List of job template names to show on this page.
Columns to display for different types of aggregates
Type of a column in a jobs tab
Contains labels
Contains parameters
Describes a column to display on the jobs page
The type of column
Heading for this column
Category of aggregates to display in this column. If null, includes any aggregate not matched by another column.
Parameter to show in this column
Relative width of this column.
Mapping from a BuildGraph agent type to a set of machines on the farm
Pool of agents to use for this agent type
Name of the workspace to sync
Path to the temporary storage dir
Environment variables to be set when executing the job
Information about a workspace type
Name of the Perforce server cluster to use
The Perforce server and port (e.g. perforce:1666)
User to log into Perforce with (defaults to buildmachine)
Password to use to log into the workspace
Identifier to distinguish this workspace from other workspaces. Defaults to the workspace type name.
Override for the stream to sync
Custom view for the workspace
Whether to use an incrementally synced workspace
Whether to use the AutoSDK
View for the AutoSDK paths to sync. If null, the entire AutoSDK will be synced.
Method to use when syncing/materializing data from Perforce
Minimum disk space that must be available *after* syncing this workspace (in megabytes)
If not available, the job will be aborted.
Threshold for when to trigger an automatic conform of the agent, measured in megabytes free on disk.
Set to null or 0 to disable.
Specifies defaults for running a preflight
The template id to query
Query for the change to use
Job template in a stream
The template id
Whether to show badges in UGS for these jobs
Whether to show alerts in UGS for these jobs
Notification channel for this template. Overrides the stream channel if set.
Notification channel filter for this template. Can be a combination of "Success", "Failure" and "Warnings" separated by pipe characters.
Triage channel for this template. Overrides the stream channel if set.
List of schedules for this template
List of chained job triggers
List of template step states
Default change to use for this job
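The notification channel filter above is described as a combination of "Success", "Failure" and "Warnings" separated by pipe characters. A minimal sketch of evaluating such a filter against a job outcome (the parsing logic here is an illustrative assumption, not the actual Horde implementation):

```python
def matches_filter(filter_text: str, outcome: str) -> bool:
    """Return True if the given outcome name appears in a pipe-separated
    filter string such as "Failure|Warnings" (case-insensitive)."""
    allowed = {part.strip().lower() for part in filter_text.split("|") if part.strip()}
    return outcome.lower() in allowed
```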
Trigger for another template
Name of the target that needs to complete before starting the other template
Id of the template to trigger
Whether to use the default change for the template rather than the change for the parent job.
Information about a paused template step
Name of the step
User who paused the step
The UTC time when the step was paused
Extension methods for streams
Updates an existing stream
The stream to update
The new datetime for pausing builds
The reason for pausing
Cancellation token for the operation
Async task object
Attempts to update the last trigger time for a schedule
The stream to update
The template ref id
Jobs to add
Jobs to remove
Cancellation token for the operation
True if the stream was updated
Check if stream is paused for new builds
The stream object
Current time (allow tests to pass in a fake clock)
If stream is paused
Collection of stream documents
Gets a stream by ID
The stream identifier
Cancellation token for the operation
The stream document
Gets a stream by ID
The stream identifiers
Cancellation token for the operation
The stream document
Identifier for a stream
Id to construct from
Identifier for a stream
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Response describing a stream
Unique id of the stream
Unique id of the project containing this stream
Name of the stream
The config file path on the server
Revision of the config file
Order to display in the list
Notification channel for all jobs in this stream
Notification channel filter for this template. Can be a combination of "Success", "Failure" and "Warnings" separated by pipe characters.
Channel to post issue triage notifications
Default template for running preflights
Default template to use for preflights
List of tabs to display for this stream
Map of agent name to type
Map of workspace name to type
Templates for jobs in this stream
Stream paused for new builds until this date
Reason for stream being paused
Workflows for this stream
Constructor
The stream to construct from
Templates for this stream
Information about the default preflight to run
Constructor
Constructor
Information about a page to display in the dashboard for a stream
Constructor
Constructor
Describes a column to display on the jobs page
Default constructor
Constructor
Mapping from a BuildGraph agent type to a set of machines on the farm
Pool of agents to use for this agent type
Name of the workspace to sync
Path to the temporary storage dir
Environment variables to be set when executing the job
Constructor
Constructor
Information about a workspace type
Default constructor
Constructor
State information for a step in the stream
Name of the step
User who paused the step
The UTC time when the step was paused
Default constructor for serialization
Constructor
Information about a template in this stream
Id of the template ref
Hash of the template definition
Whether to show badges in UGS for these jobs
Whether to show alerts in UGS for these jobs
Notification channel for this template. Overrides the stream channel if set.
Notification channel filter for this template. Can be a combination of "Success", "Failure" and "Warnings" separated by pipe characters.
Triage channel for this template. Overrides the stream channel if set.
The schedule for this ref
List of templates to trigger
List of step states
List of queries for the default changelist
Whether the user is allowed to create jobs from this template
Constructor
The template ref id
The template ref
The actual template
The template step states
The scheduler time zone
Whether the user can run this template
Trigger for another template
Constructor
Constructor
Step state update request
Name of the step
User who paused the step
Updates an existing stream template ref
Step states to update
Normalized string identifier for a resource
Enum used to disable validation on string arguments
No validation required
Maximum length for a string id
The text representing this id
Accessor for the string bytes
Accessor for the string bytes
Constructor
Unique id for the string
Constructor
Unique id for the string
Constructor
Unique id for the string
Argument used for overload resolution for pre-validated strings
Checks whether this StringId is set
Generates a new string id from the given text
Text to generate from
New string id
Validates the given string as a StringId, normalizing it if necessary.
Text to validate as a StringId
Name of the parameter to show if invalid characters are returned.
Converts a utf8 string to lowercase
Checks whether the given character is valid within a string id
The character to check
True if the character is valid
Checks whether the given character is valid within a string id
The character to check
True if the character is valid
Compares two string ids for equality
The first string id
Second string id
True if the two string ids are equal
Compares two string ids for inequality
The first string id
Second string id
True if the two string ids are not equal
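The StringId validation described above (normalize to lowercase, reject invalid characters, enforce a maximum length) can be sketched as follows. The character set and maximum length here are assumptions for illustration; the real values live in the type's IsValidCharacter check and length constant:

```python
import re

MAX_LENGTH = 64  # assumed limit; the actual maximum is a constant on the type

def sanitize_string_id(text: str) -> str:
    """Normalize text to a StringId-style identifier: lowercase it, then
    validate against an assumed set of lowercase alphanumerics and dashes."""
    normalized = text.lower()
    if not normalized or len(normalized) > MAX_LENGTH:
        raise ValueError(f"invalid string id length: {len(normalized)}")
    if not re.fullmatch(r"[a-z0-9-]+", normalized):
        raise ValueError(f"invalid characters in string id: {text!r}")
    return normalized
```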
Class which serializes types
Class which serializes types
Base class for converting to and from types containing a . Useful pattern for reducing boilerplate with strongly typed records.
Converts a type to a
Constructs a type from a
Attribute declaring a for a particular type
The converter type
Constructor
Converter to compact binary objects
Class which serializes types with a to Json
Class which serializes types with a to Json
Creates constructors for types with a to Json
Identifier for a symbol store
Id to construct from
Identifier for a symbol store
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Provides functionality for hashing files to add to a Microsoft Symbol store.
* For PE-format files (EXE, DLL), this consists of the timestamp and image size fields from the executable header.
* For PDB files, this consists of the PDB GUID followed by its age (the number of times it has been written).
Gets the hash of a file.
File to hash
Logger for diagnostic messages
Cancellation token for the operation
Hash of the file, or null if it cannot be parsed.
Gets the hash for a Windows portable executable file, consisting of concatenated hex values for the timestamp and image size fields in the header.
Structures referenced below (IMAGE_DOS_HEADER, IMAGE_NT_HEADERS) are defined in Windows headers, but are parsed via offsets into the executable
to reduce messy marshalling in C#.
Parse the hash value from a PDB file, consisting of concatenated hex values for the PDB GUID and age (the number of times it has been written).
While the fields to obtain are at fairly straightforward offsets in particular data structures, PDBs are internally structured as an MSF file
consisting of multiple data streams with non-contiguous pages. There are a few references for parsing this format:
* LLVM documentation: https://llvm.org/docs/PDB/MsfFile.html
* Microsoft's PDB source code: https://github.com/microsoft/microsoft-pdb (particularly PDB/msf/msf.cpp)
This function only handles the BigMSF format.
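The PE-file hash described above (concatenated hex of the header timestamp and image size) can be sketched by reading the same header fields via raw offsets, mirroring the offset-based parsing the text mentions. This is an illustrative assumption of the key format, not the Horde implementation; error handling is minimal:

```python
import struct
from typing import Optional

def pe_symbol_hash(data: bytes) -> Optional[str]:
    """Sketch of a symbol-store key for a PE file: timestamp as 8 uppercase
    hex digits followed by SizeOfImage in hex. Returns None if unparseable."""
    if len(data) < 0x40 or data[:2] != b"MZ":
        return None
    # IMAGE_DOS_HEADER.e_lfanew at offset 0x3C points to the "PE\0\0" signature
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\0\0":
        return None
    # IMAGE_FILE_HEADER.TimeDateStamp is 8 bytes past the signature
    (timestamp,) = struct.unpack_from("<I", data, pe_offset + 8)
    # The optional header starts 24 bytes past the signature; SizeOfImage sits
    # at offset 56 within it (same offset for PE32 and PE32+)
    (size_of_image,) = struct.unpack_from("<I", data, pe_offset + 24 + 56)
    return f"{timestamp:08X}{size_of_image:x}"
```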
Identifier for a particular metric
Id to construct from
Identifier for a particular metric
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Generic message for a telemetry event
List of telemetry events
Indicates the type of telemetry data being uploaded
A batch of objects.
Metrics matching a particular query
The corresponding metric id
Metric grouping information
Metrics matching the search terms
Information about a particular metric
Start time for the sample
Name of the group
Value for the metric
The units used to present the telemetry
Time duration
Ratio 0-100%
Arbitrary numeric value
The type of
A line graph
Key performance indicator (KPI) chart with thresholds
Metric attached to a telemetry chart
Associated metric id
The threshold for KPI values
The metric alias for display purposes
Telemetry chart configuration
The name of the chart, will be displayed on the dashboard
The unit to display
The graph type
List of configured metrics
The min unit value for clamping chart
The max unit value for clamping chart
A chart category, displayed on the dashboard under an associated pivot
The name of the category
The charts contained within the category
A telemetry view variable used for filtering the charting data
The name of the variable for display purposes
The associated data group attached to the variable
The default values to select
A telemetry view of related metrics, divided into categories
Identifier for the view
The name of the view
The telemetry store this view uses
The variables used to filter the view data
The categories contained within the view
Identifier for a particular metric view
Id to construct from
Identifier for a particular metric view
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Metric attached to a telemetry chart
Associated metric id
The threshold for KPI values
The metric alias for display purposes
Telemetry chart configuration
The name of the chart, will be displayed on the dashboard
The unit to display
The graph type
List of configured metrics
The min unit value for clamping chart
The max unit value for clamping chart
A chart category, displayed on the dashboard under an associated pivot
The name of the category
The charts contained within the category
A telemetry view variable used for filtering the charting data
The name of the variable for display purposes
The associated data group attached to the variable
The default values to select
A telemetry view of related metrics, divided into categories
Identifier for the view
The name of the view
The telemetry store the view uses
The variables used to filter the view data
The categories contained within the view
Identifier for a telemetry store
Id to construct from
Identifier for a telemetry store
Id to construct from
Id to construct from
Default telemetry store for Horde internal metrics
Constructor
Converter to and from instances.
Constructor
Describes a standalone, external tool hosted and deployed by Horde. Provides basic functionality for performing
gradual roll-out, versioning, etc...
Identifier for the tool
Name of the tool
Long-form description of the tool
Category for the tool on the dashboard
Grouping key for merging tool versions together on the dashboard
Supported platforms, as .NET runtime identifiers
Whether the tool is available to authenticated users
Whether the tool is bundled with the server
Whether to show the tool for download in UGS
Whether to show the tool for download in the dashboard
Whether to show the tool for download in the toolbox
User-defined metadata for this tool
Current deployments of this tool, sorted by time.
Authorize a user to perform a particular action
Action the user is trying to perform
Identity of the user trying to perform the action
Adds a new deployment to the given tool. The new deployment will replace the current active deployment.
Options for the new deployment
Stream containing the tool data
Cancellation token for the operation
Updated tool document, or null if it does not exist
Adds a new deployment to the given tool. The new deployment will replace the current active deployment.
Options for the new deployment
Path to the root node containing the tool data
Cancellation token for the operation
Updated tool document, or null if it does not exist
Gets the storage backend for this tool
Gets the storage namespace for this particular tool
Deployment of a tool
Identifier for this deployment. A new identifier will be assigned to each created instance, so an identifier corresponds to a unique deployment.
Descriptive version string for this tool revision
Current state of this deployment
Current progress of the deployment
Last time at which the progress started. Set to null if the deployment was paused.
Length of time over which to make the deployment
Namespace containing the tool
Reference to this tool in Horde Storage.
Handle to the tool data
Updates the state of the current deployment
New state of the deployment
Cancellation token for the operation
Opens a stream to the data for a particular deployment
Cancellation token for the operation
Stream for the data
Options for a new deployment
Whether to create the deployment in a paused state
Extension methods for tools
Gets the current deployment
Tool to query
Adoption phase for the caller
Current time
Get the progress fraction for a particular deployment and time
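A deployment's progress fraction can be derived from the fields described above: a baseline progress value, the time the current phase started (null while paused), and the roll-out duration. The linear model and names below are illustrative assumptions, not the exact Horde implementation:

```python
from datetime import datetime, timedelta
from typing import Optional

def deployment_progress(base_progress: float, started_at: Optional[datetime],
                        duration: timedelta, now: datetime) -> float:
    """If the deployment is paused (started_at is None), the baseline holds;
    otherwise elapsed time advances it linearly over the deployment duration,
    clamped to [0, 1]."""
    if started_at is None or duration.total_seconds() <= 0.0:
        return min(max(base_progress, 0.0), 1.0)
    elapsed = (now - started_at).total_seconds()
    fraction = base_progress + elapsed / duration.total_seconds()
    return min(max(fraction, 0.0), 1.0)
```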
Collection of tools
Gets a tool with the given identifier
The tool identifier
Cancellation token for the operation
The requested tool, or null if it does not exist
Gets all the available tools
List of the available tools
Identifier for a tool deployment
Identifier for the artifact
Identifier for a tool deployment
Identifier for the artifact
Identifier for the artifact
Converter class to and from BinaryId values
Identifier for a tool
Id to construct from
Identifier for a tool
Id to construct from
Id to construct from
Constructor
Converter to and from instances.
Describes a standalone, external tool hosted and deployed by Horde. Provides basic functionality for performing
gradual roll-out, versioning, etc...
Unique identifier for the tool
Name of the tool
Description for the tool
Category to display the tool in on the dashboard
Grouping key to control how different tools should be merged on the dashboard
List of platforms that this tool supports, as .NET runtime identifiers.
Current deployments of this tool, sorted by time.
Whether this tool should be exposed for download on a public endpoint without authentication
Whether this tool is bundled with the server
Whether to show this tool for download inside UGS
Whether to show this tool for download on the dashboard
Whether to show this tool for download in Unreal Toolbox
Metadata for the tool
Describes a standalone, external tool hosted and deployed by Horde. Provides basic functionality for performing
gradual roll-out, versioning, etc...
Unique identifier for the tool
Name of the tool
Description for the tool
Category to display the tool in on the dashboard
Grouping key to control how different tools should be merged on the dashboard
List of platforms that this tool supports, as .NET runtime identifiers.
Current deployments of this tool, sorted by time.
Whether this tool should be exposed for download on a public endpoint without authentication
Whether this tool is bundled with the server
Whether to show this tool for download inside UGS
Whether to show this tool for download on the dashboard
Whether to show this tool for download in Unreal Toolbox
Metadata for the tool
Unique identifier for the tool
Name of the tool
Description for the tool
Category to display the tool in on the dashboard
Grouping key to control how different tools should be merged on the dashboard
List of platforms that this tool supports, as .NET runtime identifiers.
Current deployments of this tool, sorted by time.
Whether this tool should be exposed for download on a public endpoint without authentication
Whether this tool is bundled with the server
Whether to show this tool for download inside UGS
Whether to show this tool for download on the dashboard
Whether to show this tool for download in Unreal Toolbox
Metadata for the tool
Summary for a particular tool.
Unique identifier for the tool
Name of the tool
Description for the tool
Category to display the tool in on the dashboard
Grouping key to control how different tools should be merged on the dashboard
List of platforms that this tool supports, as .NET runtime identifiers.
Version number of the current deployment of this tool
Identifier for the current deployment
Current state of the deployment
Current progress of the deployment
Whether this tool is bundled with the server
Whether to show this tool for download inside UGS
Whether to show this tool for download on the dashboard
Whether to show this tool for download in the launcher
Metadata for the tool
Summary for a particular tool.
Unique identifier for the tool
Name of the tool
Description for the tool
Category to display the tool in on the dashboard
Grouping key to control how different tools should be merged on the dashboard
List of platforms that this tool supports, as .NET runtime identifiers.
Version number of the current deployment of this tool
Identifier for the current deployment
Current state of the deployment
Current progress of the deployment
Whether this tool is bundled with the server
Whether to show this tool for download inside UGS
Whether to show this tool for download on the dashboard
Whether to show this tool for download in the launcher
Metadata for the tool
Unique identifier for the tool
Name of the tool
Description for the tool
Category to display the tool in on the dashboard
Grouping key to control how different tools should be merged on the dashboard
List of platforms that this tool supports, as .NET runtime identifiers.
Version number of the current deployment of this tool
Identifier for the current deployment
Current state of the deployment
Current progress of the deployment
Whether this tool is bundled with the server
Whether to show this tool for download inside UGS
Whether to show this tool for download on the dashboard
Whether to show this tool for download in the launcher
Metadata for the tool
Response when querying all tools
List of tools currently available
Response when querying all tools
List of tools currently available
List of tools currently available
Response object describing the deployment of a tool
Identifier for this deployment. A new identifier will be assigned to each created instance, so an identifier corresponds to a unique deployment.
Descriptive version string for this tool revision
Current state of this deployment
Current progress of the deployment
Last time at which the progress started. Set to null if the deployment was paused.
Length of time over which to make the deployment
Reference to the deployment data
Reference to this tool in Horde Storage
Response object describing the deployment of a tool
Identifier for this deployment. A new identifier will be assigned to each created instance, so an identifier corresponds to a unique deployment.
Descriptive version string for this tool revision
Current state of this deployment
Current progress of the deployment
Last time at which the progress started. Set to null if the deployment was paused.
Length of time over which to make the deployment
Reference to the deployment data
Reference to this tool in Horde Storage
Identifier for this deployment. A new identifier will be assigned to each created instance, so an identifier corresponds to a unique deployment.
Descriptive version string for this tool revision
Current state of this deployment
Current progress of the deployment
Last time at which the progress started. Set to null if the deployment was paused.
Length of time over which to make the deployment
Reference to the deployment data
Reference to this tool in Horde Storage
Request for creating a new deployment
Nominal version string for this deployment
Number of minutes over which to do the deployment
Whether to create the deployment in a paused state
Handle to a directory node with the content for the deployment
Request for creating a new deployment
Nominal version string for this deployment
Number of minutes over which to do the deployment
Whether to create the deployment in a paused state
Handle to a directory node with the content for the deployment
Nominal version string for this deployment
Number of minutes over which to do the deployment
Whether to create the deployment in a paused state
Handle to a directory node with the content for the deployment
Response from creating a deployment
Identifier for the created deployment
Response from creating a deployment
Identifier for the created deployment
Identifier for the created deployment
Current state of a tool's deployment
The deployment is ongoing
The deployment should be paused at its current state
Deployment of this version is complete
The deployment has been cancelled.
Update an existing deployment
New state for the deployment
Action for a deployment
Query for information about the deployment
Download the deployment data
Download the deployment data as a zip file
Review by a user of a particular change
No vote for the current change
Successfully compiled the change
Failed to compile the change
Manually marked the change as good
Manually marked the change as bad
State of a badge
Starting work on this badge, outcome currently unknown
Badge failed
Badge produced a warning
Badge succeeded
Badge was skipped
Adds a new badge to the change
Name of the badge
Url for the badge
Current status of the badge
Request object for adding new metadata to the server
The stream name
The changelist number
The project name
Name of the current user
Whether this changelist has been synced by the user
State of the user
New starred state for the issue
Whether the user is investigating
Comment for this change
List of badges to add
Information about a user synced to a change
Name of the user
Time that the change was synced
State of the user
Comment by this user
Whether the user is investigating this change
Whether this changelist is starred
Information about a badge
Name of the badge
Url for the badge
Current status of the badge
Response object for querying metadata
Number of the changelist
The project name
Information about a user synced to this change
Badges for this change
Response object for querying metadata
Last time that the metadata was modified
List of changes matching the requested criteria
Outcome of a particular build
Unknown outcome
Build succeeded
Build failed
Build finished with warnings
Legacy response describing a build
Identifier for this build
Path to the stream containing this build
The changelist number for this build
Name of this job
Link to the job
Name of the job step
Link to the job step
Url for this particular error
Outcome of this build
Information about a diagnostic
The corresponding build id
Message for the diagnostic
Link to the error
Constructor
The corresponding build id
Message for the diagnostic
Link to the diagnostic
Stores information about a build health issue
Version number for this response
The unique object id
Time at which the issue was created
Time at which the issue was retrieved
The associated project for the issue
The summary text for this issue
Owner of the issue
User that nominated the current owner
Time that the issue was acknowledged
Changelist that fixed this issue
Time at which the issue was resolved
Whether to notify the user about this issue
Whether this issue just contains warnings
Link to the last build
List of streams affected by this issue
Request an issue to be updated
New owner of the issue
User that nominates the new owner
Whether the issue has been acknowledged
Name of the user that declines the issue
The change at which the issue is claimed fixed. 0 = not fixed, -1 = systemic issue.
Whether the issue should be marked as resolved
Name of the user that resolved the issue
Identifier for a user
Id to construct from
Identifier for a user
Id to construct from
Id to construct from
Constant value for empty user id
Special user id for an anonymous administrator
Converter to and from instances.
Response describing the current user
Id of the user
Name of the user
Avatar image URL (24px)
Avatar image URL (32px)
Avatar image URL (48px)
Avatar image URL (72px)
Email of the user
Claims for the user
Whether to enable experimental features for this user
Whether to always tag preflight changelists
Settings for the dashboard
Settings for whether various dashboard features should be shown for the current user
User job template preferences
List of pinned job ids
List of pinned bisection task ids
Constructor
New claim document
Type of the claim
Value for the claim
Constructor
Resolved permissions for an action in a given ACL scope
Scope name
Action name
Whether action is authorized in given scope
Constructor
Resolved permissions for ACL scopes for a given user
List of ACL permissions
Constructor
Job template settings for the current user
The stream the job was run in
The template id of the job
The hash of the template definition
The arguments defined when creating the job
The last update time of the job template
Constructor
Settings for whether various features should be enabled on the dashboard
Navigate to the landing page by default
Custom landing page route to direct users to
Enable CI functionality
Whether to show functionality related to agents, pools, and utilization on the dashboard.
Whether to show the agent registration page. When using registration tokens from elsewhere this is not needed.
Show the Perforce server option on the server menu
Show the device manager on the server menu
Show automated tests on the server menu
Whether the remote desktop button should be shown on the agent modal
Whether the notice editor should be listed in the server menu
Whether controls for modifying pools should be shown
Basic information about a user. May be embedded in other responses.
Id of the user
Name of the user
The user's email address
The user login [DEPRECATED]
Constructor
Request to update settings for a user
Whether to enable experimental features for this user
Whether to always tag preflight CL
New dashboard settings
Job ids to add to the pinned list
Job ids to remove from the pinned list
Bisection task ids to add to the pinned list
Bisection task ids to remove from the pinned list
Converts TimeSpan intervals to and from formats like "30m", "1h30m", etc.
Parse a string as a time interval
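A minimal sketch of such a parser, assuming the "1h30m"-style grammar described above (the actual Horde implementation and its accepted units may differ):

```csharp
// Hypothetical sketch: parse "30m" / "1h30m" style strings into a TimeSpan.
using System;
using System.Text.RegularExpressions;

static class IntervalParser
{
	// Accepts sequences like "1d2h30m15s"; each unit is optional (assumed grammar).
	static readonly Regex s_pattern = new Regex(
		@"^(?:(\d+)d)?(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?$", RegexOptions.Compiled);

	public static TimeSpan Parse(string text)
	{
		string trimmed = text.Trim();
		Match match = s_pattern.Match(trimmed);
		if (!match.Success || trimmed.Length == 0)
		{
			throw new FormatException($"Invalid interval: '{text}'");
		}
		int Value(int group) => match.Groups[group].Success ? int.Parse(match.Groups[group].Value) : 0;
		return new TimeSpan(Value(1), Value(2), Value(3), Value(4));
	}
}

// IntervalParser.Parse("1h30m") -> 01:30:00
```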
OpenTelemetry configuration for collection and sending of traces and metrics.
Whether OpenTelemetry exporting is enabled
Service name
Service namespace
Service version
Whether to enrich and format telemetry to fit presentation in Datadog
Extra attributes to set
Whether to enable the console exporter (for debugging purposes)
Protocol exporters (key is a unique and arbitrary name)
Configuration for an OpenTelemetry exporter
Endpoint URL. Usually differs depending on protocol used.
Protocol for the exporter ('grpc' or 'httpprotobuf')
Provides extension methods for serializing and deserializing OpenTelemetrySettings
Serializes OpenTelemetrySettings to a JSON string, with an option to encode as base64
OpenTelemetrySettings to serialize.
If true, the resulting JSON string is encoded as base64
Deserializes a JSON string of OpenTelemetrySettings
The string to deserialize, which can be either a JSON string or a Base64 encoded JSON string.
If true, the input string is treated as base64 encoded
The deserialized OpenTelemetrySettings
Thrown when deserialization fails or results in a null object.
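The described extension methods could be sketched as below. The signatures, the placeholder settings class, and the use of System.Text.Json are assumptions; only the behavior (JSON with optional base64 encoding, throwing when deserialization yields null) follows the summaries above.

```csharp
// Sketch of the serialization helpers described above; not the actual Horde API.
using System;
using System.Text;
using System.Text.Json;

// Trimmed placeholder for the real settings class.
public sealed class OpenTelemetrySettings
{
	public bool Enabled { get; set; }
	public string? ServiceName { get; set; }
	public string? ServiceNamespace { get; set; }
}

public static class OpenTelemetrySettingsExtensions
{
	// Serializes settings to JSON, optionally encoding the result as base64.
	public static string Serialize(this OpenTelemetrySettings settings, bool encodeAsBase64 = false)
	{
		string json = JsonSerializer.Serialize(settings);
		return encodeAsBase64 ? Convert.ToBase64String(Encoding.UTF8.GetBytes(json)) : json;
	}

	// Deserializes from a JSON string or a base64-encoded JSON string.
	public static OpenTelemetrySettings Deserialize(string text, bool isBase64 = false)
	{
		string json = isBase64 ? Encoding.UTF8.GetString(Convert.FromBase64String(text)) : text;
		return JsonSerializer.Deserialize<OpenTelemetrySettings>(json)
			?? throw new InvalidOperationException("Deserialized OpenTelemetrySettings was null");
	}
}
```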
Wraps a semaphore controlling the number of open files at any time. Used to prevent exceeding handle limit on MacOS.
Acquire the platform file lock
Cancellation token for the operation
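A process-wide file-handle limiter of this kind can be sketched with a `SemaphoreSlim` whose releaser is disposed when the file is closed. The concurrency cap and disposal pattern here are assumptions, not the actual Horde implementation:

```csharp
// Sketch: cap concurrent open files to stay under the macOS handle limit.
// The limit value and API shape are illustrative assumptions.
using System;
using System.Threading;
using System.Threading.Tasks;

public static class PlatformFileLock
{
	static readonly SemaphoreSlim s_semaphore = new SemaphoreSlim(128);

	// Acquire the platform file lock; dispose the result to release it.
	public static async Task<IDisposable> AcquireAsync(CancellationToken cancellationToken = default)
	{
		await s_semaphore.WaitAsync(cancellationToken);
		return new Releaser();
	}

	sealed class Releaser : IDisposable
	{
		int _disposed;
		public void Dispose()
		{
			// Guard against double-release from repeated Dispose calls.
			if (Interlocked.Exchange(ref _disposed, 1) == 0)
			{
				s_semaphore.Release();
			}
		}
	}
}
```

Callers would wrap each file open in `using (await PlatformFileLock.AcquireAsync(token)) { ... }` so the handle count never exceeds the semaphore's capacity.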
Parses a time of day into a number of minutes since midnight
Parse a string as a number of minutes since midnight
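A sketch of such a parser, assuming it accepts both 12-hour ("9:30am") and 24-hour ("14:05") forms; the formats actually supported by Horde may differ:

```csharp
// Hypothetical sketch: parse a time of day into minutes since midnight.
using System;
using System.Globalization;

static class TimeOfDayParser
{
	public static int ParseMinutes(string text)
	{
		// Accepted shapes (assumed): "9:30am", "9am", "14:05", "14"
		string[] formats = { "h:mmtt", "htt", "H:mm", "H" };
		DateTime time = DateTime.ParseExact(
			text.Trim().ToUpperInvariant(), formats, CultureInfo.InvariantCulture, DateTimeStyles.None);
		return (int)time.TimeOfDay.TotalMinutes;
	}
}

// TimeOfDayParser.ParseMinutes("9:30am") -> 570
```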
Searchable reference to a jobstep
Globally unique identifier for the jobstep being referenced
Name of the job
Name of the node
Unique id of the stream containing the job
Template for the job being executed
The change number being built
Log for this step
The agent type
The agent id
Outcome of the step, once complete.
Whether this step should update issues
Issues ids affecting this job step
The last change that succeeded. Note that this is only set when the ref is updated; it is not necessarily consistent with steps run later.
The last change that succeeded or completed with a warning.
Time taken for the batch containing this step to start after it became ready
Time taken for this batch to initialize
Time at which the step started.
Time at which the step finished.
Attribute indicating that an object should generate a schema doc page
Page title
Rail to show with breadcrumbs at the top of the page
Output filename
Optional introductory text on the page
Constructor
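An attribute carrying the fields described above might look like the following sketch; the property names, which are required versus optional, and the attribute targets are all assumptions:

```csharp
// Illustrative sketch of a schema-doc page attribute; not the actual Horde type.
using System;

[AttributeUsage(AttributeTargets.Class)]
public sealed class SchemaDocAttribute : Attribute
{
	public string Title { get; }              // Page title
	public string Rail { get; }               // Breadcrumb rail shown at the top of the page
	public string FileName { get; }           // Output filename
	public string? Introduction { get; set; } // Optional introductory text on the page

	public SchemaDocAttribute(string title, string rail, string fileName)
	{
		Title = title;
		Rail = rail;
		FileName = fileName;
	}
}
```

A doc generator would then reflect over types carrying this attribute and emit one page per annotated schema object.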
Holder for reflection information generated from horde/log_rpc.proto
File descriptor for horde/log_rpc.proto
Service descriptor
Base class for server-side implementations of LogRpc
Update the current log state
The request received from the client.
The context of the server-side call handler being invoked.
The response to send back to the client (wrapped by a task).
Long poll for requests to return unflushed log tail data
Used for reading requests from the client.
Used for sending responses back to the client.
The context of the server-side call handler being invoked.
A task indicating completion of the handler.
Creates events for a log file, highlighting particular lines of interest
The request received from the client.
The context of the server-side call handler being invoked.
The response to send back to the client (wrapped by a task).
Client for LogRpc
Creates a new client for LogRpc
The channel to use to make remote calls.
Creates a new client for LogRpc that uses a custom CallInvoker.
The callInvoker to use to make remote calls.
Protected parameterless constructor to allow creation of test doubles.
Protected constructor to allow creation of configured clients.
The client configuration.
Update the current log state
The request to send to the server.
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The response received from the server.
Update the current log state
The request to send to the server.
The options for the call.
The response received from the server.
Update the current log state
The request to send to the server.
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The call object.
Update the current log state
The request to send to the server.
The options for the call.
The call object.
Long poll for requests to return unflushed log tail data
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The call object.
Long poll for requests to return unflushed log tail data
The options for the call.
The call object.
Creates events for a log file, highlighting particular lines of interest
The request to send to the server.
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The response received from the server.
Creates events for a log file, highlighting particular lines of interest
The request to send to the server.
The options for the call.
The response received from the server.
Creates events for a log file, highlighting particular lines of interest
The request to send to the server.
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The call object.
Creates events for a log file, highlighting particular lines of interest
The request to send to the server.
The options for the call.
The call object.
Creates a new instance of client from given ClientBaseConfiguration.
Creates service definition that can be registered with a server
An object implementing the server-side handling logic.
Register service method with a service binder with or without implementation. Useful when customizing the service binding logic.
Note: this method is part of an experimental API that can change or be removed without any prior notice.
Service methods will be bound by calling AddMethod on this object.
An object implementing the server-side handling logic.
Holder for reflection information generated from horde/log_rpc_messages.proto
File descriptor for horde/log_rpc_messages.proto
Field number for the "LogId" field.
The unique log id
Field number for the "TargetHash" field.
Hash of the latest flushed node
Field number for the "TargetLocator" field.
Locator for the latest flushed node
Field number for the "LineCount" field.
Number of lines that have been flushed
Field number for the "Complete" field.
Whether the log is complete
Field number for the "LogId" field.
The unique log id
Field number for the "TailNext" field.
Starting line index of the new tail data
Field number for the "TailData" field.
New tail data to append (from LineCount backwards)
Field number for the "TailNext" field.
Index of the next requested tail line, or -1 if tailing is not desired.
Field number for the "Events" field.
List of events to send
Field number for the "Severity" field.
Severity of this event
Field number for the "LogId" field.
Unique id of the log containing this event
Field number for the "LineIndex" field.
Index of the first line relating to this event
Field number for the "LineCount" field.
Number of lines in this event