cape.cfdx.options.databookopts: Databook definition options

This module contains the basic interface that defines the "DataBook" of a CAPE configuration, which controls which information from the CFD runs is extracted and collected elsewhere.

The DataBookOpts class defined here lists the databook components in the Components option, and each component has its own entry in the "DataBook" section. A component that has no further definitions is usually interpreted as a force & moment component (Type of "FM"), but other databook component types can also be used.
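As a hedged illustration, a "DataBook" section maps directly onto a DataBookOpts instance; the component names "wing" and "wing_ll" and the values below are hypothetical, and only options documented in this module are used:

>>> opts = DataBookOpts(
...     Components=["wing", "wing_ll"],
...     NStats=100,
...     NMin=500,
...     wing={"Type": "FM"},
...     wing_ll={"Type": "LineLoad", "CompID": "wing", "NCut": 200})
>>> opts.get_DataBookComponents()
['wing', 'wing_ll']
>>> opts.get_DataBookType("wing_ll")
'LineLoad'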

class cape.cfdx.options.databookopts.DataBookOpts(*args, **kw)

Dictionary-based interface for DataBook specifications

Call:
>>> opts = DataBookOpts(**kw)
Inputs:
kw: dict

Dictionary of options

Outputs:
opts: DataBookOpts

Data book options interface

classmethod add_compgetter(opt: str, prefix=None, name=None, doc=True)

Add getter method for option opt

For example cls.add_compgetter("a") will add a function get_a(), which has a signature like OptionsDict.get_opt() except that it doesn’t have the opt input.

Call:
>>> cls.add_compgetter(opt, prefix=None, name=None)
Inputs:
cls: type

A subclass of OptionsDict

opt: str

Name of option

prefix: {None} | str

Optional prefix in method name

name: {opt} | str

Alternate name to use in the name of the getter function

doc: {True} | False

Whether or not to add docstring to getter function

Versions:
  • 2022-11-08 @ddalle: v1.0
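A hedged sketch of using this hook from a subclass; the subclass name and the option "NAvg" are hypothetical and assume that option is otherwise declared for the class:

>>> class MyDataBookOpts(DataBookOpts):
...     pass
>>> MyDataBookOpts.add_compgetter("NAvg")

This would add a method get_NAvg(comp=None, j=None, i=None, **kw) that looks up "NAvg" for a given component, falling back to the top-level "DataBook" value.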

classmethod add_compgetters(optlist, prefix=None, name=None, doc=True)

Add list of component-specific getters with common settings

Call:
>>> cls.add_compgetters(optlist, prefix=None, name=None)
Inputs:
cls: type

A subclass of OptionsDict

optlist: list[str]

Name of options to process

prefix: {None} | str

Optional prefix, e.g. opt="a", prefix="my" will add a function get_my_a()

name: {opt} | str

Alternate name to use in the name of the getter functions

doc: {True} | False

Whether or not to add docstring to functions

Versions:
  • 2022-11-08 @ddalle: v1.0

assert_DataBookComponent(comp: str)

Ensure comp is in the list of "DataBook" components

Call:
>>> opts.assert_DataBookComponent(comp)
Inputs:
opts: cape.cfdx.options.Options

Options interface

comp: str

Name of databook component

Versions:
  • 2023-03-10 @ddalle: v1.0

assert_DataBookTarget(targ: str)

Ensure targ is in the list of "DataBook" targets

Call:
>>> opts.assert_DataBookTarget(targ)
Inputs:
opts: cape.cfdx.options.Options

Options interface

targ: str

Name of databook target

Versions:
  • 2023-03-12 @ddalle: v1.0

get_CompTargets(comp: str)

Get the list of targets for a specific data book component

Call:
>>> targs = opts.get_CompTargets(comp)
Inputs:
opts: cape.cfdx.options.Options

Options interface

comp: str

Name of component

Outputs:
targs: list[str]

List of targets for that component

Versions:
  • 2014-12-21 @ddalle: v1.0

get_DataBookAbsProjTol(comp=None, j=None, i=None, **kw)

Get absolute projection tolerance

Call:
>>> AbsProjTol = opts.get_DataBookAbsProjTol(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
AbsProjTol: {None} | object

absolute projection tolerance

get_DataBookAbsTol(comp=None, j=None, i=None, **kw)

Get absolute tangent tolerance for surface mapping

Call:
>>> AbsTol = opts.get_DataBookAbsTol(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
AbsTol: {None} | object

absolute tangent tolerance for surface mapping

get_DataBookByGlob(typ, pat=None)

Get list of components by type and list of wild cards

Call:
>>> comps = opts.get_DataBookByGlob(typ, pat=None)
Inputs:
opts: cape.cfdx.options.Options

Options interface

typ: "FM" | str

Target value for "Type" of matching components

pat: {None} | str | list

List of component name patterns

Outputs:
comps: list[str]

All components meeting one or more wild cards

Versions:
  • 2017-04-25 @ddalle: v1.0

  • 2023-02-06 @ddalle: v1.1; improved naming

  • 2023-03-09 @ddalle: v1.2; validate typ
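For example, assuming a (hypothetical) databook whose "FM" components include "wing_left" and "wing_right":

>>> opts.get_DataBookByGlob("FM", "wing_*")
['wing_left', 'wing_right']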

get_DataBookByType(typ: str) → list

Get the list of data book components with a given type

Call:
>>> comps = opts.get_DataBookByType(typ)
Inputs:
opts: cape.cfdx.options.Options

Options interface

typ: "FM" | "LineLoad" | str

Data book type

Outputs:
comps: list[str]

List of components with "Type" matching typ

Versions:
  • 2016-06-07 @ddalle: v1.0

  • 2023-03-09 @ddalle: v1.1; validate typ
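For example, to collect every line load component (using the hypothetical configuration sketched near the top of this page):

>>> opts.get_DataBookByType("LineLoad")
['wing_ll']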

get_DataBookColStats(comp: str, col: str) → list

Get list of statistical properties for a databook column

Call:
>>> sts = opts.get_DataBookColStats(comp, col)
Inputs:
opts: cape.cfdx.options.Options

Options interface

comp: str

Name of data book component

col: str

Name of data book col, "CA", "CY", etc.

Outputs:
sts: list[str]

List of statistical properties for this col; values include mu (mean), min, max, std, and err

Versions:
  • 2016-03-15 @ddalle: v1.0

  • 2023-03-12 @ddalle: v1.1; optdict; needs override

get_DataBookCols(comp=None, j=None, i=None, **kw)

Get value of option “Cols”

Call:
>>> Cols = opts.get_DataBookCols(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Cols: {None} | object

value of option “Cols”

get_DataBookCompID(comp: str, **kw)

Get CompID option for a component

Call:
>>> compid = opts.get_DataBookCompID(comp, **kw)
Inputs:
opts: cape.cfdx.options.Options

Options interface

comp: str

Name of databook component

Outputs:
compid: int | str | list

Value of opt from either opts or opts[comp]

Versions:
  • 2023-01-22 @ddalle: v1.0
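For example, with the hypothetical "wing_ll" component from the sketch above, whose "CompID" is set explicitly:

>>> opts.get_DataBookCompID("wing_ll")
'wing'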

get_DataBookCompProjTol(comp=None, j=None, i=None, **kw)

Get projection tolerance relative to size of component

Call:
>>> CompProjTol = opts.get_DataBookCompProjTol(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
CompProjTol: {None} | object

projection tolerance relative to size of component

get_DataBookCompTol(comp=None, j=None, i=None, **kw)

Get tangent tolerance relative to component

Call:
>>> CompTol = opts.get_DataBookCompTol(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
CompTol: {None} | object

tangent tolerance relative to component

get_DataBookComponents(j=None, i=None, **kw)

Get list of databook components

Call:
>>> Components = opts.get_DataBookComponents(j=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Components: {None} | list[str]

list of databook components

get_DataBookConfigCompID(comp=None, j=None, i=None, **kw)

Get value of option “ConfigCompID”

Call:
>>> ConfigCompID = opts.get_DataBookConfigCompID(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
ConfigCompID: {None} | object

value of option “ConfigCompID”

get_DataBookConfigFile(comp=None, j=None, i=None, **kw)

Get value of option “ConfigFile”

Call:
>>> ConfigFile = opts.get_DataBookConfigFile(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
ConfigFile: {None} | object

value of option “ConfigFile”

get_DataBookDNStats(comp=None, j=None, i=None, **kw)

Get increment for candidate window sizes

Call:
>>> DNStats = opts.get_DataBookDNStats(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
DNStats: {None} | int

increment for candidate window sizes

get_DataBookDataCols(comp: str)

Get the list of data book columns for a specific component

This includes the list of coefficients, e.g. ['CA', 'CY', 'CN'], plus statistics such as 'CA_min' if nStats is greater than 0.

Call:
>>> cols = opts.get_DataBookDataCols(comp)
Inputs:
opts: cape.cfdx.options.Options

Options interface

comp: str

Name of component

Outputs:
cols: list[str]

List of coefficients and other columns for that component

Versions:
  • 2014-12-21 @ddalle: v1.0

  • 2022-04-08 @ddalle: v2.0; coefficient-specific suffixes

  • 2023-03-12 @ddalle: v3.0; use optdict
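A hedged illustration, assuming a component whose "Cols" option is ["CA", "CN"] and whose per-column statistics include the minimum and standard deviation (the exact list and its order depend on get_DataBookColStats()):

>>> opts.get_DataBookCols("wing")
['CA', 'CN']
>>> opts.get_DataBookDataCols("wing")
['CA', 'CN', 'CA_min', 'CA_std', 'CN_min', 'CN_std']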

get_DataBookDelimiter(comp=None, j=None, i=None, **kw)

Get delimiter to use in databook files

Call:
>>> Delimiter = opts.get_DataBookDelimiter(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Delimiter: {','} | str

delimiter to use in databook files

get_DataBookFloatCols(comp=None, j=None, i=None, **kw)

Get additional databook cols with floating-point values

Call:
>>> FloatCols = opts.get_DataBookFloatCols(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
FloatCols: {None} | object

additional databook cols with floating-point values

get_DataBookFolder(j=None, i=None, **kw)

Get folder for root of databook

Call:
>>> Folder = opts.get_DataBookFolder(j=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Folder: {'data'} | str

folder for root of databook

get_DataBookFunction(comp=None, j=None, i=None, **kw)

Get value of option “Function”

Call:
>>> Function = opts.get_DataBookFunction(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Function: {None} | object

value of option “Function”

get_DataBookGauge(comp=None, j=None, i=None, **kw)

Get option to use gauge pressures in computations

Call:
>>> Gauge = opts.get_DataBookGauge(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Gauge: {None} | object

option to use gauge pressures in computations

get_DataBookIntCols(comp=None, j=None, i=None, **kw)

Get value of option “IntCols”

Call:
>>> IntCols = opts.get_DataBookIntCols(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
IntCols: {None} | object

value of option “IntCols”

get_DataBookMapTri(comp=None, j=None, i=None, **kw)

Get name of a tri file to use for remapping CFD surface comps

Call:
>>> MapTri = opts.get_DataBookMapTri(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
MapTri: {None} | object

name of a tri file to use for remapping CFD surface comps

get_DataBookMomentum(comp=None, j=None, i=None, **kw)

Get whether to use momentum flux in force computations

Call:
>>> Momentum = opts.get_DataBookMomentum(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Momentum: {None} | object

whether to use momentum flux in force computations

get_DataBookNCut(comp=None, j=None, i=None, **kw)

Get number of 'LineLoad' cuts for triload

Call:
>>> NCut = opts.get_DataBookNCut(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
NCut: {None} | object

number of 'LineLoad' cuts for triload

get_DataBookNMaxStats(comp=None, j=None, i=None, **kw)

Get max number of iters to include in averaging window

Call:
>>> NMaxStats = opts.get_DataBookNMaxStats(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
NMaxStats: {None} | int

max number of iters to include in averaging window

get_DataBookNMin(comp=None, j=None, i=None, **kw)

Get first iter to consider for use in databook [for a comp]

Call:
>>> NMin = opts.get_DataBookNMin(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
NMin: {0} | int

first iter to consider for use in databook [for a comp]

get_DataBookNStats(comp=None, j=None, i=None, **kw)

Get iterations to use in averaging window [for a comp]

Call:
>>> NStats = opts.get_DataBookNStats(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
NStats: {0} | int

iterations to use in averaging window [for a comp]

get_DataBookOpt(comp: str, opt: str, check=True, **kw)

Get an option for a specific component

Call:
>>> v = opts.get_DataBookOpt(comp, opt, **kw)
Inputs:
opts: cape.cfdx.options.Options

Options interface

comp: str

Name of specific databook component

opt: str

Name of option to access

check: {True} | False

Option to fail if comp not present

Outputs:
v: object

Value of opt from either opts or opts[comp]

Versions:
  • 2024-01-19 @ddalle: v1.0

  • 2024-05-20 @ddalle: v1.1; add check
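For example, reading an arbitrary per-component setting from the hypothetical configuration sketched above (the value is taken from opts[comp] when present, otherwise from the top-level section):

>>> opts.get_DataBookOpt("wing_ll", "NCut")
200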

get_DataBookOutputFormat(comp=None, j=None, i=None, **kw)

Get value of option “OutputFormat”

Call:
>>> OutputFormat = opts.get_DataBookOutputFormat(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
OutputFormat: {None} | object

value of option “OutputFormat”

get_DataBookPatches(comp=None, j=None, i=None, **kw)

Get list of patches for a databook component

Call:
>>> Patches = opts.get_DataBookPatches(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Patches: {None} | object

list of patches for a databook component

get_DataBookPoints(comp=None, j=None, i=None, **kw)

Get list of individual point sensors

Call:
>>> Points = opts.get_DataBookPoints(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Points: {None} | object

list of individual point sensors

get_DataBookPrefix(comp=None, j=None, i=None, **kw)

Get value of option “Prefix”

Call:
>>> Prefix = opts.get_DataBookPrefix(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Prefix: {None} | object

value of option “Prefix”

get_DataBookRelProjTol(comp=None, j=None, i=None, **kw)

Get projection tolerance relative to size of geometry

Call:
>>> RelProjTol = opts.get_DataBookRelProjTol(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
RelProjTol: {None} | object

projection tolerance relative to size of geometry

get_DataBookRelTol(comp=None, j=None, i=None, **kw)

Get tangent tolerance relative to overall geometry scale

Call:
>>> RelTol = opts.get_DataBookRelTol(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
RelTol: {None} | object

tangent tolerance relative to overall geometry scale

get_DataBookSectionType(comp=None, j=None, i=None, **kw)

Get line load section type

Call:
>>> SectionType = opts.get_DataBookSectionType(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
SectionType: {None} | object

line load section type

get_DataBookTargetByName(name: str)

Get a data book target by Name, using user-defined name

Call:
>>> topts = opts.get_DataBookTargetByName(name)
Inputs:
opts: cape.cfdx.options.Options

Options interface

name: str

Name of the data book target

Outputs:
topts: DBTargetOpts

Databook target options

Versions:
  • 2015-12-15 @ddalle: v1.0

  • 2023-03-12 @ddalle: v2.0; use optdict

get_DataBookTargetCommentChar(targ: str, j=None, i=None, **kw)

Get value of option “CommentChar”

Call:
>>> CommentChar = opts.get_DataBookTargetCommentChar(targ, i=None, **kw)
Inputs:
opts: DBTargetCollectionOpts

options interface

targ: str

Name of databook target

i: {None} | int

Case index

Outputs:
CommentChar: {None} | object

value of option “CommentChar”

get_DataBookTargetComponents(targ: str, j=None, i=None, **kw)

Get value of option “Components”

Call:
>>> Components = opts.get_DataBookTargetComponents(targ, i=None, **kw)
Inputs:
opts: DBTargetCollectionOpts

options interface

targ: str

Name of databook target

i: {None} | int

Case index

Outputs:
Components: {None} | object

value of option “Components”

get_DataBookTargetDelimiter(targ: str, j=None, i=None, **kw)

Get value of option “Delimiter”

Call:
>>> Delimiter = opts.get_DataBookTargetDelimiter(targ, i=None, **kw)
Inputs:
opts: DBTargetCollectionOpts

options interface

targ: str

Name of databook target

i: {None} | int

Case index

Outputs:
Delimiter: {None} | object

value of option “Delimiter”

get_DataBookTargetFile(targ: str, j=None, i=None, **kw)

Get value of option “File”

Call:
>>> File = opts.get_DataBookTargetFile(targ, i=None, **kw)
Inputs:
opts: DBTargetCollectionOpts

options interface

targ: str

Name of databook target

i: {None} | int

Case index

Outputs:
File: {None} | object

value of option “File”

get_DataBookTargetFolder(targ: str, j=None, i=None, **kw)

Get value of option “Folder”

Call:
>>> Folder = opts.get_DataBookTargetFolder(targ, i=None, **kw)
Inputs:
opts: DBTargetCollectionOpts

options interface

targ: str

Name of databook target

i: {None} | int

Case index

Outputs:
Folder: {None} | object

value of option “Folder”

get_DataBookTargetLabel(targ: str, **kw)

Get Label from databook target, falling back to Name

Call:
>>> lbl = opts.get_DataBookTargetLabel(targ, **kw)
Inputs:
opts: cape.cfdx.options.Options

Options interface

targ: str

Name of databook target

Outputs:
lbl: targ | str

User-defined Label of target, falling back to Name and then to targ

Versions:
  • 2023-03-12 @ddalle: v1.0

get_DataBookTargetName(targ: str, **kw)

Get Name from databook target, falling back to targ

Call:
>>> name = opts.get_DataBookTargetName(targ, **kw)
Inputs:
opts: cape.cfdx.options.Options

Options interface

targ: str

Name of databook target

Outputs:
name: targ | str

User-defined Name of target, falling back to targ

Versions:
  • 2023-03-12 @ddalle: v1.0
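For example, for a hypothetical target entry defined under the key "exp" with no explicit "Name" or "Label", both getters fall back to the key itself:

>>> opts.get_DataBookTargetName("exp")
'exp'
>>> opts.get_DataBookTargetLabel("exp")
'exp'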

get_DataBookTargetTolerances(targ: str, j=None, i=None, **kw)

Get value of option “Tolerances”

Call:
>>> Tolerances = opts.get_DataBookTargetTolerances(targ, i=None, **kw)
Inputs:
opts: DBTargetCollectionOpts

options interface

targ: str

Name of databook target

i: {None} | int

Case index

Outputs:
Tolerances: {None} | object

value of option “Tolerances”

get_DataBookTargetTranslations(targ: str, j=None, i=None, **kw)

Get value of option “Translations”

Call:
>>> Translations = opts.get_DataBookTargetTranslations(targ, i=None, **kw)
Inputs:
opts: DBTargetCollectionOpts

options interface

targ: str

Name of databook target

i: {None} | int

Case index

Outputs:
Translations: {None} | object

value of option “Translations”

get_DataBookTargetType(targ: str, j=None, i=None, **kw)

Get value of option “Type”

Call:
>>> Type = opts.get_DataBookTargetType(targ, i=None, **kw)
Inputs:
opts: DBTargetCollectionOpts

options interface

targ: str

Name of databook target

i: {None} | int

Case index

Outputs:
Type: {None} | object

value of option “Type”

get_DataBookTargets(j=None, i=None, **kw)

Get value of option “Targets”

Call:
>>> Targets = opts.get_DataBookTargets(j=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Targets: {None} | object

value of option “Targets”

get_DataBookTransformations(comp=None, j=None, i=None, **kw)

Get value of option “Transformations”

Call:
>>> Transformations = opts.get_DataBookTransformations(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Transformations: {None} | object

value of option “Transformations”

get_DataBookTrim(comp=None, j=None, i=None, **kw)

Get trim flag to triload

Call:
>>> Trim = opts.get_DataBookTrim(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Trim: {None} | object

trim flag to triload

get_DataBookTriqFormat(comp=None, j=None, i=None, **kw)

Get file format for any .triq files to read

Call:
>>> TriqFormat = opts.get_DataBookTriqFormat(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
TriqFormat: {None} | object

file format for any .triq files to read

get_DataBookType(comp=None, j=None, i=None, **kw)

Get Default component type

Call:
>>> Type = opts.get_DataBookType(comp=None, i=None, **kw)
Inputs:
opts: DataBookOpts

options interface

comp: {None} | str

Name of databook component

i: {None} | int

Case index

Outputs:
Type: 'CaseProp' | {'FM'} | 'IterPoint' | 'LineLoad' | 'PyFunc' | 'TriqFM' | 'TriqPoint'

Default component type

set_DataBookComponents(v, j=None, mode=None)

Set list of databook components

Call:
>>> opts.set_DataBookComponents(Components, j=None, mode=None)
Inputs:
opts: DataBookOpts

options interface

Components: {None} | list[str]

list of databook components

j: {None} | int

Phase index; use None to just return v

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

listdepth: {0} | int > 0

Depth of list to treat as a scalar

set_DataBookDNStats(v, j=None, mode=None)

Set increment for candidate window sizes

Call:
>>> opts.set_DataBookDNStats(DNStats, j=None, mode=None)
Inputs:
opts: DataBookOpts

options interface

DNStats: {None} | int

increment for candidate window sizes

j: {None} | int

Phase index; use None to just return v

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

listdepth: {0} | int > 0

Depth of list to treat as a scalar

set_DataBookDelimiter(v, j=None, mode=None)

Set delimiter to use in databook files

Call:
>>> opts.set_DataBookDelimiter(Delimiter, j=None, mode=None)
Inputs:
opts: DataBookOpts

options interface

Delimiter: {','} | str

delimiter to use in databook files

j: {None} | int

Phase index; use None to just return v

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

listdepth: {0} | int > 0

Depth of list to treat as a scalar

set_DataBookFolder(v, j=None, mode=None)

Set folder for root of databook

Call:
>>> opts.set_DataBookFolder(Folder, j=None, mode=None)
Inputs:
opts: DataBookOpts

options interface

Folder: {'data'} | str

folder for root of databook

j: {None} | int

Phase index; use None to just return v

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

listdepth: {0} | int > 0

Depth of list to treat as a scalar

set_DataBookNMaxStats(v, j=None, mode=None)

Set max number of iters to include in averaging window

Call:
>>> opts.set_DataBookNMaxStats(NMaxStats, j=None, mode=None)
Inputs:
opts: DataBookOpts

options interface

NMaxStats: {None} | int

max number of iters to include in averaging window

j: {None} | int

Phase index; use None to just return v

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

listdepth: {0} | int > 0

Depth of list to treat as a scalar

set_DataBookNMin(v, j=None, mode=None)

Set first iter to consider for use in databook [for a comp]

Call:
>>> opts.set_DataBookNMin(NMin, j=None, mode=None)
Inputs:
opts: DataBookOpts

options interface

NMin: {0} | int

first iter to consider for use in databook [for a comp]

j: {None} | int

Phase index; use None to just return v

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

listdepth: {0} | int > 0

Depth of list to treat as a scalar

set_DataBookNStats(v, j=None, mode=None)

Set iterations to use in averaging window [for a comp]

Call:
>>> opts.set_DataBookNStats(NStats, j=None, mode=None)
Inputs:
opts: DataBookOpts

options interface

NStats: {0} | int

iterations to use in averaging window [for a comp]

j: {None} | int

Phase index; use None to just return v

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

listdepth: {0} | int > 0

Depth of list to treat as a scalar

validate_DataBookType(typ: str)

Ensure that typ is a recognized DataBook Type

Call:
>>> opts.validate_DataBookType(typ)
Inputs:
opts: cape.cfdx.options.Options

Options interface

typ: "FM" | str

Target value for "Type" of matching components

Raises:

ValueError

Versions:
  • 2023-03-09 @ddalle: v1.0
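A recognized type passes silently, while an unrecognized string raises ValueError; a minimal illustration (the invalid name is made up):

>>> opts.validate_DataBookType("LineLoad")
>>> opts.validate_DataBookType("not-a-type")
Traceback (most recent call last):
    ...
ValueError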

class cape.cfdx.options.databookopts.DBTargetOpts(*args, **kw)

Dictionary-based interface for data book targets

Call:
>>> opts = DBTargetOpts(fjson, **kw)
>>> opts = DBTargetOpts(mydict, **kw)
Inputs:
fjson: {None} | str

Name of JSON file with settings

mydict: dict

Existing options structure

kw: dict

Additional options from keyword arguments

Outputs:
opts: DBTargetOpts

Data book target options interface

Versions:
  • 2014-12-01 @ddalle: v1.0

  • 2023-03-11 @ddalle: v2.0; use optdict
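A hedged sketch of a single target definition; the file name, component list, and tolerance values below are hypothetical:

>>> topts = DBTargetOpts(
...     Name="exp",
...     Label="Wind tunnel",
...     File="exp_data.csv",
...     Folder="data",
...     Components=["wing"],
...     Tolerances={"mach": 0.02, "alpha": 0.25})
>>> topts.get_Name()
'exp'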

get_CommentChar(j=None, i=None, **kw)

Get Character(s) denoting a comment line in target file

Call:
>>> CommentChar = opts.get_CommentChar(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
CommentChar: {'#'} | str

Character(s) denoting a comment line in target file

get_Components(j=None, i=None, **kw)

Get List of databook components with data from this target

Call:
>>> Components = opts.get_Components(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Components: {None} | list[str]

List of databook components with data from this target

get_Delimiter(j=None, i=None, **kw)

Get Delimiter in databook target data file

Call:
>>> Delimiter = opts.get_Delimiter(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Delimiter: {','} | str

Delimiter in databook target data file

get_File(j=None, i=None, **kw)

Get Name of file from which to read data

Call:
>>> File = opts.get_File(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
File: {None} | str

Name of file from which to read data

get_Folder(j=None, i=None, **kw)

Get Name of folder from which to read data

Call:
>>> Folder = opts.get_Folder(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Folder: {'data'} | str

Name of folder from which to read data

get_Label(j=None, i=None, **kw)

Get Label to use when plotting this target

Call:
>>> Label = opts.get_Label(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Label: {None} | str

Label to use when plotting this target

get_Name(j=None, i=None, **kw)

Get Internal name to use for target

Call:
>>> Name = opts.get_Name(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Name: {None} | str

Internal name to use for target

get_Tol(col: str)

Get the tolerance for a particular trajectory key

Call:
>>> tol = opts.get_Tol(col)
Inputs:
opts: DBTargetOpts

Options interface

col: str

Name of trajectory key

Outputs:
tol: {None} | float

Max distance for a match for column col

Versions:
  • 2015-12-16 @ddalle: v1.0

  • 2023-03-11 @ddalle: v2.0
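Continuing the hypothetical target sketched above, this returns the tolerance stored for one run-matrix key, or None if no tolerance is defined for it:

>>> topts.get_Tol("mach")
0.02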

get_Tolerances(j=None, i=None, **kw)

Get Dictionary of tolerances for run matrix keys

Call:
>>> Tolerances = opts.get_Tolerances(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Tolerances: {None} | dict

Dictionary of tolerances for run matrix keys

get_Translations(j=None, i=None, **kw)

Get value of option “Translations”

Call:
>>> Translations = opts.get_Translations(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Translations: {None} | dict

value of option “Translations”

get_Type(j=None, i=None, **kw)

Get DataBook Target type

Call:
>>> Type = opts.get_Type(j=None, i=None, **kw)
Inputs:
opts: DBTargetOpts

options interface

j: {None} | int

Phase index; use None to just return v

i: {None} | int | np.ndarray

opts.x index(es) to use with @expr, @map, etc.

vdef: {None} | object

Manual default

mode: {None} | 0 | 1 | 2 | 3

Warning mode code

0:

no checks

1:

validate silently

2:

validate and show warnings

3:

raise an exception if invalid

ring: {opts._optring[key]} | True | False

Override option to loop through phase inputs

listdepth: {0} | int > 0

Depth of list to treat as a scalar

x: {None} | dict

Ref conditions to use with @expr, @map, etc.; often a run matrix; used in combination with i

sample: {True} | False

Apply j, i, and other settings recursively if output is a list or dict

Outputs:
Type: 'databook' | {'generic'}

DataBook Target type