cape.attdb.rdb: Main DataKit module¶

This module provides the class DataKit as a subclass of dict that
contains methods common to each of the other database classes. The
DataKit class provides an interface both to store data and to create
and call "response surfaces" that define specific, potentially complex
interpolation methods to evaluate the data as a function of several
independent variables.

Finally, having this common template class provides a single point of
entry for testing whether an object is based on a product of the
cape.attdb.rdb module. The following Python sample tests whether any
Python object db is an instance of any class from this data-file
collection.

    isinstance(db, cape.attdb.rdb.DataKit)

This class is the basic data container for ATTDB databases and has
interfaces to several different file types.
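As a minimal sketch of constructing a DataKit, assuming a data file
named "aero.csv" with hypothetical contents:

    >>> import cape.attdb.rdb as rdb
    >>> # Read a CSV file; the format is guessed from the extension
    >>> db = rdb.DataKit("aero.csv")
    >>> # Link the same data and definitions into a second DataKit
    >>> db2 = rdb.DataKit(db)
    >>> isinstance(db2, rdb.DataKit)
    True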
- class cape.attdb.rdb.DataKit(fname=None, **kw)¶
Basic database template without responses
- Call:
  >>> db = DataKit(fname=None, **kw)
  >>> db = DataKit(db)
- Inputs:
  - fname: {None} | str
    File name; extension is used to guess data format
  - db: DataKit
    DataKit from which to link data and defns
  - csv: {None} | str
    Explicit file name for CSVFile read
  - textdata: {None} | str
    Explicit file name for TextDataFile read
  - simplecsv: {None} | str
    Explicit file name for CSVSimple read
  - simpletsv: {None} | str
    Explicit file name for TSVSimple read
  - xls: {None} | str
    File name for XLSFile
  - mat: {None} | str
    File name for MATFile
- Outputs:
  - db: DataKit
    Generic database
- Versions:
  - 2019-12-04 @ddalle: Version 1.0
  - 2020-02-19 @ddalle: Version 1.1; was DBResponseNull
- __call__(*a, **kw)¶
Generic evaluation function
- Call:
  >>> v = db(*a, **kw)
  >>> v = db(col, x0, x1, ...)
  >>> V = db(col, x0, X1, ...)
  >>> v = db(col, k0=x0, k1=x1, ...)
  >>> V = db(col, k0=x0, k1=X1, ...)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of column to evaluate
  - x0: float | int
    Numeric value for first argument to col response
  - x1: float | int
    Numeric value for second argument to col response
  - X1: np.ndarray[float]
    Array of x1 values
  - k0: str | unicode
    Name of first argument to col response
  - k1: str | unicode
    Name of second argument to col response
- Outputs:
  - v: float | int
    Function output for scalar evaluation
  - V: np.ndarray[float]
    Array of function outputs
- Versions:
  - 2019-01-07 @ddalle: Version 1.0
  - 2019-12-30 @ddalle: Version 2.0; map of methods
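A minimal usage sketch, assuming db defines a response for a column
"CN" with arguments "mach" and "alpha" (all names hypothetical):

    >>> import numpy as np
    >>> # Scalar evaluation at one condition
    >>> CN = db("CN", 0.9, 4.0)
    >>> # Same call using keyword arguments
    >>> CN = db("CN", mach=0.9, alpha=4.0)
    >>> # Vectorized evaluation over a sweep of angle of attack
    >>> CN = db("CN", mach=0.9, alpha=np.linspace(-4.0, 8.0, 25))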
- __init__(fname=None, **kw)¶
Initialization method
- Versions:
  - 2019-12-06 @ddalle: Version 1.0
- add_png_fig(png, fig)¶
Add figure handle to set of active figs for PNG tag
- Call:
  >>> db.add_png_fig(png, fig)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - png: str
    Name/abbreviation/tag of PNG image to use
  - fig: matplotlib.figure.Figure
    Figure handle
- Effects:
  - db.png_figs[png]: set
    Adds fig to set if not already present
- Versions:
  - 2020-04-01 @ddalle: Version 1.0
- add_seam_fig(seam, fig)¶
Add figure handle to set of active figs for seam curve tag
- Call:
  >>> db.add_seam_fig(seam, fig)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - seam: str
    Name used to tag this seam curve
  - fig: matplotlib.figure.Figure
    Figure handle
- Effects:
  - db.seam_figs[seam]: set
    Adds fig to set if not already present
- Versions:
  - 2020-04-01 @ddalle: Version 1.0
- append_colname(col, suffix)¶
Add a suffix to a column name
This maintains component names; for example, if col is "bullet.CLM"
and suffix is "X", the result is "bullet.CLMX".
- Call:
  >>> newcol = db.append_colname(col, suffix)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of column to append
  - suffix: str
    Suffix to append to column name
- Outputs:
  - newcol: str
    Suffixed column name
- Versions:
  - 2020-03-24 @ddalle: Version 1.0
- append_data(dbsrc, cols=None, **kw)¶
Save one or more cols from another database
Note: this is the same as link_data() but with append defaulting to
True.
- Call:
  >>> db.append_data(dbsrc, cols=None)
- Inputs:
  - db: DataKit
    Data container
  - dbsrc: dict
    Additional data container, not required to be a datakit
  - cols: {None} | list[str]
    List of columns to link (default is dbsrc.cols)
  - append: {True} | False
    Option to append data (or replace it)
  - prefix: {None} | str
    Prefix applied to dbsrc col when saved in db
  - suffix: {None} | str
    Suffix applied to dbsrc col when saved in db
- Effects:
  - db.cols: list[str]
    Appends each col in cols where not present
  - db[col]: dbsrc[col]
    Reference to dbsrc data for each col
- Versions:
  - 2021-09-10 @ddalle: Version 1.0
- apply_mask(mask, cols=None)¶
Apply a mask to one or more cols
- Call:
  >>> db.apply_mask(mask, cols=None)
  >>> db.apply_mask(mask_index, cols=None)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - mask: {None} | np.ndarray[bool]
    Logical mask of True/False values
  - mask_index: np.ndarray[int]
    Indices of values to consider
  - cols: {None} | list[str]
    List of columns to subset (default is all)
- Effects:
  - db[col]: list | np.ndarray
    Subset db[col][mask] or similar
- Versions:
  - 2021-09-10 @ddalle: Version 1.0
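A minimal sketch, assuming db contains a "mach" column (hypothetical
name):

    >>> # Keep only supersonic cases, subsetting every column in place
    >>> mask = db["mach"] > 1.0
    >>> db.apply_mask(mask)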
- argsort(cols=None)¶
Get (ascending) sort order using list of cols
- Call:
  >>> I = db.argsort(cols=None)
- Inputs:
  - db: DataKit
    Data interface with response mechanisms
  - cols: {None} | list[str]
    List of columns on which to sort, with highest sort priority given
    to the first col; later cols are used as tie-breakers
- Outputs:
  - I: np.ndarray[int]
    Ordering such that db[cols[0]][I] is ascending, etc.
- Versions:
  - 2021-09-17 @ddalle: Version 1.0
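A minimal sketch, assuming db contains "mach" and "alpha" columns
(hypothetical names):

    >>> # Sort by Mach number, breaking ties by angle of attack
    >>> I = db.argsort(["mach", "alpha"])
    >>> mach_sorted = db["mach"][I]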
- assert_mask(mask, col=None, V=None)¶
Make sure that mask is a valid index/bool mask
- Call:
  >>> db.assert_mask(mask, col=None, V=None)
  >>> db.assert_mask(mask_index, col=None, V=None)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - mask: None | np.ndarray[bool]
    Logical mask of True/False values
  - mask_index: np.ndarray[int]
    Indices of values to consider
  - col: {None} | str
    Column name to use to create default V
  - V: {None} | np.ndarray
    Array of values to test shape/values of mask
- Versions:
  - 2020-04-21 @ddalle: Version 1.0
- check_mask(mask, col=None, V=None)¶
Check if mask is a valid index/bool mask
- Call:
  >>> q = db.check_mask(mask, col=None, V=None)
  >>> q = db.check_mask(mask_index, col=None, V=None)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - mask: {None} | np.ndarray[bool]
    Logical mask of True/False values
  - mask_index: np.ndarray[int]
    Indices of values to consider
  - col: {None} | str
    Column name to use to create default V
  - V: {None} | np.ndarray
    Array of values to test shape/values of mask
- Outputs:
  - q: True | False
    Whether or not mask is a valid mask
- Versions:
  - 2020-04-21 @ddalle: Version 1.0
- check_png_fig(png, fig)¶
Check if figure is in set of active figs for PNG tag
- Call:
  >>> q = db.check_png_fig(png, fig)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - png: str
    Name/abbreviation/tag of PNG image to use
  - fig: None | matplotlib.figure.Figure
    Figure handle
- Outputs:
  - q: True | False
    Whether or not fig is in db.png_figs[png]
- Versions:
  - 2020-04-01 @ddalle: Version 1.0
- check_seam_fig(seam, fig)¶
Check if figure is in set of active figs for seam curve tag
- Call:
  >>> q = db.check_seam_fig(seam, fig)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - seam: str
    Name used to tag this seam curve
  - fig: None | matplotlib.figure.Figure
    Figure handle
- Outputs:
  - q: True | False
    Whether or not fig is in db.seam_figs[seam]
- Versions:
  - 2020-04-01 @ddalle: Version 1.0
- clear_png_fig(png)¶
Reset the set of figures for PNG tag
- Call:
  >>> db.clear_png_fig(png)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - png: str
    Name/abbreviation/tag of PNG image to use
- Effects:
  - db.png_figs[png]: set
    Cleared to empty set
- Versions:
  - 2020-04-01 @ddalle: Version 1.0
- clone_defns(defns, prefix='', _warnmode=0)¶
Copy a data store’s column definitions
- Call:
  >>> db.clone_defns(defns, prefix="")
- Inputs:
  - db: DataKit
    Data container
  - defns: dict
    Dictionary of column definitions
  - prefix: {""} | str
    Prefix to prepend to key names in db.opts
- Effects:
  - db.opts: dict
    Options merged with or copied from opts
  - db.defns: dict
    Merged with opts["Definitions"]
- Versions:
  - 2019-12-06 @ddalle: Version 1.0
  - 2019-12-26 @ddalle: Added db.defns effect
  - 2020-02-13 @ddalle: Split from copy_options()
  - 2020-03-06 @ddalle: Renamed from copy_defns()
- clone_options(opts, prefix='')¶
Copy a database’s options
- Call:
  >>> db.clone_options(opts, prefix="")
- Inputs:
  - db: DataKit
    Data container
  - opts: dict
    Options dictionary
  - prefix: {""} | str
    Prefix to prepend to key names in db.opts
- Effects:
  - db.opts: dict
    Options merged with or copied from opts
  - db.defns: dict
    Merged with opts["Definitions"]
- Versions:
  - 2019-12-06 @ddalle: Version 1.0
  - 2019-12-26 @ddalle: Added db.defns effect
  - 2020-02-10 @ddalle: Removed db.defns effect
  - 2020-03-06 @ddalle: Renamed from copy_options()
- copy()¶
Make a copy of a database class
Each database class may need its own version of this method
- copy_DataKit(dbcopy)¶
Copy attributes and data relevant to null-response DB
- copy__dict__(dbtarg, skip=[])¶
Copy all attributes except for specified list
- Call:
  >>> db.copy__dict__(dbtarg, skip=[])
- Inputs:
  - dbtarg: DataKit
    Target database to receive the attributes
  - skip: {[]} | list[str]
    List of attribute names not to copy
- Effects:
  - getattr(dbtarg, k): getattr(db, k, vdef)
    Shallow copy of each attribute from db, or vdef if necessary
- Versions:
  - 2019-12-04 @ddalle: Version 1.0
- copyattr(dbtarg, k, vdef={})¶
Make an appropriate copy of an attribute if present
- Call:
  >>> db.copyattr(dbtarg, k, vdef={})
- Inputs:
  - dbtarg: DataKit
    Target database to receive the attribute
  - k: str
    Name of attribute to copy
  - vdef: {{}} | any
    Default value if the attribute is not present in db
- Effects:
  - getattr(dbtarg, k): getattr(db, k, vdef)
    Shallow copy of attribute from db, or vdef if necessary
- Versions:
  - 2018-06-08 @ddalle: Version 1.0
  - 2019-12-04 @ddalle: Copied from DBCoeff
- copyitem(v)¶
Return a copy of appropriate depth following class rules
- Call:
  >>> vcopy = db.copyitem(v)
- Inputs:
  - db: DataKit
    Generic database
  - v: any
    Variable to be copied
- Outputs:
  - vcopy: v.__class__
    Copy of v (shallow or deep)
- Versions:
  - 2019-12-04 @ddalle: Version 1.0
- create_arg_alternates(col, extracols=None)¶
Create set of keys that might be used as kwargs to col
- Call:
  >>> db.create_arg_alternates(col, extracols=None)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column with response method
  - extracols: {None} | set | list
    Additional col names that might be used as kwargs
- Effects:
  - db.response_arg_alternates[col]: set
    Cols that are used by response for col
- Versions:
  - 2020-04-24 @ddalle: Version 1.0
- create_bkpts(cols, nmin=5, tol=1e-12, tols={}, mask=None)¶
Create automatic list of break points for interpolation
- Call:
  >>> db.create_bkpts(col, nmin=5, tol=1e-12, **kw)
  >>> db.create_bkpts(cols, nmin=5, tol=1e-12, **kw)
- Inputs:
  - db: DataKit
    Data container
  - col: str
    Individual lookup variable
  - cols: list[str]
    List of lookup variables
  - nmin: {5} | int > 0
    Minimum number of data points at one value of a key
  - tol: {1e-12} | float >= 0
    Tolerance for values considered to be equal
  - tols: {{}} | dict[float]
    Tolerances for specific cols
  - mask: np.ndarray[bool | int]
    Mask of which database indices to consider
- Outputs:
  - db.bkpts: dict
    Dictionary of 1D unique lookup values
  - db.bkpts[col]: np.ndarray | list
    Unique values of db[col] with at least nmin entries
- Versions:
  - 2018-06-08 @ddalle: Version 1.0
  - 2019-12-16 @ddalle: Updated for rdbnull
  - 2020-03-26 @ddalle: Renamed from get_bkpts()
  - 2020-05-06 @ddalle: Moved much to genr8_bkpts()
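A minimal sketch, assuming db contains "mach" and "alpha" columns
(hypothetical names):

    >>> # Build break points used by interpolation lookups
    >>> db.create_bkpts(["mach", "alpha"], nmin=5)
    >>> # Unique mach values with at least nmin points each
    >>> db.bkpts["mach"]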
- create_bkpts_map(cols, scol, tol=1e-12)¶
Map break points of one column to one or more others
The most common reason to use this method is to create non-ascending
break points. One common example is to keep track of the dynamic
pressure values at each Mach number. These dynamic pressures may be
unique, but sorting them by dynamic pressure is different from the
order in which they occur in flight.
- Call:
  >>> db.create_bkpts_map(cols, scol, tol=1e-12)
- Inputs:
  - db: DataKit
    Data container
  - cols: list[str]
    List of lookup variables
  - scol: str
    Name of key to drive map/schedule
  - tol: {1e-12} | float >= 0
    Tolerance cutoff (used for scol)
- Outputs:
  - db.bkpts: dict
    Dictionary of 1D unique lookup values
  - db.bkpts[col]: np.ndarray[float]
    Mapped values of db[col] for each col in cols
- Versions:
  - 2018-06-29 @ddalle: Version 1.0
  - 2019-12-16 @ddalle: Ported to rdbnull
  - 2020-03-26 @ddalle: Renamed from map_bkpts()
- create_bkpts_schedule(cols, scol, nmin=5, tol=1e-12)¶
Create lists of unique values at each unique value of scol
This function creates a break point list of the unique values of each
col in cols at each unique value of a "scheduling" column scol. For
example, if a different run matrix of alpha and beta is used at each
Mach number, this function creates a list of the unique alpha and beta
values for each Mach number in db.bkpts["mach"].
- Call:
  >>> db.create_bkpts_schedule(cols, scol)
- Inputs:
  - db: DataKit
    Data container
  - cols: list[str]
    List of lookup variables
  - scol: str
    Name of key to drive map/schedule
  - nmin: {5} | int > 0
    Minimum number of data points at one value of a key
  - tol: {1e-12} | float >= 0
    Tolerance cutoff
- Outputs:
  - db.bkpts: dict
    Dictionary of unique lookup values
  - db.bkpts[col]: list[np.ndarray]
    Unique values of db[col] at each value of scol
- Versions:
  - 2018-06-29 @ddalle: Version 1.0
  - 2019-12-16 @ddalle: Ported to rdbnull
  - 2020-03-26 @ddalle: Renamed from schedule_bkpts()
- create_global_rbfs(cols, args, I=None, **kw)¶
Create global radial basis functions for one or more columns
- Call:
  >>> db.create_global_rbfs(cols, args, I=None)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - cols: list[str]
    List of columns to create RBFs for
  - args: list[str]
    List of (ordered) input keys, default is from db.bkpts
  - I: {None} | np.ndarray
    Indices of cases to include in RBF (default is all)
  - function: {"cubic"} | str
    Radial basis function type
  - smooth: {0.0} | float >= 0
    Smoothing factor, 0.0 for exact interpolation
- Effects:
  - db.rbf[col]: scipy.interpolate.rbf.Rbf
    Radial basis function for each col in cols
- Versions:
  - 2019-01-01 @ddalle: Version 1.0
  - 2019-12-17 @ddalle: Ported from tnakit
  - 2020-02-22 @ddalle: Utilize create_rbf()
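A minimal sketch, assuming db stores a coefficient "CN" as a function
of "mach" and "alpha" (hypothetical names):

    >>> # Train a global RBF for CN over (mach, alpha)
    >>> db.create_global_rbfs(["CN"], ["mach", "alpha"], smooth=0.0)
    >>> # The trained RBF is stored per column and is itself callable
    >>> rbf = db.rbf["CN"]
    >>> CN = rbf(0.9, 2.0)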
- create_integral(col, xcol=None, ocol=None, **kw)¶
Integrate the columns of a 2D data col
- Call:
  >>> y = db.create_integral(col, xcol=None, ocol=None, **kw)
- Inputs:
  - db: DataKit
    Database with analysis tools
  - col: str
    Name of data column to integrate
  - xcol: {None} | str
    Name of column to use as x-coords for integration
  - ocol: {col[1:]} | str
    Name of col in which to store result
  - mask: np.ndarray[bool | int]
    Mask or indices of which cases to integrate
  - x: {None} | np.ndarray
    Optional 1D or 2D x-coordinates directly specified
  - dx: {1.0} | float
    Uniform spacing to use if xcol and x are not used
  - method: {"trapz"} | "left" | "right" | callable
    Integration method or callable function taking two args like
    np.trapz()
- Outputs:
  - y: np.ndarray
    1D array of integral of each column of db[col]
- Versions:
  - 2020-03-24 @ddalle: Version 1.0
  - 2020-06-02 @ddalle: Added mask, callable method
- create_rbf_cols(col, **kw)¶
Generate data to describe existing RBF(s) for col
This saves various properties extracted from db.rbf[col] directly as
additional columns in db. These values can then be used by infer_rbf()
to reconstruct a SciPy radial basis function response mechanism
without re-solving the original linear system of equations that trains
the RBF weights.
- Call:
  >>> db.create_rbf_cols(col, **kw)
- Inputs:
  - db: DataKit
    DataKit with db.rbf[col] defined
  - col: str
    Name of column whose RBF will be analyzed
  - expand: True | {False}
    Repeat properties like eps for each node of RBF (for uniform data
    size, usually to write to CSV file)
- Effects:
  - db[col]: np.ndarray[float]
    Values of col used in RBF
  - db[col+"_method"]: np.ndarray[int]
    Response method index: 4: "rbf"; 5: "rbf-map"; 6: "rbf-schedule"
  - db[col+"_rbf"]: np.ndarray[float]
    Weight for each RBF node
  - db[col+"_func"]: np.ndarray[int]
    RBF basis function index: 0: "multiquadric";
    1: "inverse_multiquadric"; 2: "gaussian"; 3: "linear"; 4: "cubic";
    5: "quintic"; 6: "thin_plate"
  - db[col+"_eps"]: np.ndarray[float]
    Epsilon scaling factor for (each) RBF
  - db[col+"_smooth"]: np.ndarray[float]
    Smoothing factor for (each) RBF
  - db[col+"_N"]: np.ndarray[float]
    Number of nodes in (each) RBF
  - db[col+"_xcols"]: list[str]
    List of arguments for col
  - db.response_args[col]: list[str]
    List of arguments for col
- Versions:
  - 2021-09-16 @ddalle: Version 1.0
- create_rbf_from_db(dbf)¶
Create RBF response from data object
- Call:
  >>> db.create_rbf_from_db(dbf)
- Inputs:
  - db: DataKit
    Data container with responses
  - dbf: dict | BaseData
    Raw data container
- Versions:
  - 2019-07-24 @ddalle: Version 1.0; ReadRBFCSV()
  - 2021-06-07 @ddalle: Version 2.0
  - 2021-09-14 @ddalle: Version 2.1; bug fix/testing
- create_rbfs_cols(cols, **kw)¶
Save data to describe multiple existing RBFs
- Call:
  >>> db.create_rbfs_cols(cols, **kw)
- Inputs:
  - db: DataKit
    DataKit with db.rbf[col] defined
  - cols: list[str]
    Names of columns whose RBFs will be archived
  - expand: True | {False}
    Repeat properties like eps for each node of RBF (for uniform data
    size, usually to write to CSV file)
- See Also:
  - create_rbf_cols()
- Versions:
  - 2021-09-16 @ddalle: Version 1.0
- create_slice_rbfs(cols, args, I=None, **kw)¶
Create radial basis functions for each slice of args[0]
The first entry in args is interpreted as a “slice” key; RBFs will be constructed at constant values of args[0].
- Call:
  >>> db.create_slice_rbfs(cols, args, I=None)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - cols: list[str]
    List of columns to create RBFs for
  - args: list[str]
    List of (ordered) input keys, default is from db.bkpts
  - I: {None} | np.ndarray
    Indices of cases to include in RBF (default is all)
  - function: {"cubic"} | str
    Radial basis function type
  - smooth: {0.0} | float >= 0
    Smoothing factor, 0.0 for exact interpolation
- Effects:
  - db.rbf[col]: list[scirbf.Rbf]
    List of RBFs at each slice for each col in cols
- Versions:
  - 2019-01-01 @ddalle: Version 1.0
  - 2019-12-17 @ddalle: Ported from tnakit
- est_cov_interval(dbt, col, mask=None, cov=0.95, **kw)¶
Calculate Student’s t-distribution confidence region
If the nominal application of the Student’s t-distribution fails to cover a high enough fraction of the data, the bounds are extended until the data is covered.
- Call:
  >>> a, b = db.est_cov_interval(dbt, col, mask, cov, **kw)
- Inputs:
  - db: DataKit
    Data kit with response surfaces
  - dbt: dict | DataKit
    Target data set
  - col: str
    Name of data column to analyze
  - mask: np.ndarray[bool | int]
    Subset of db to consider
  - maskt: np.ndarray[bool | int]
    Subset of dbt to consider
  - cov: {0.95} | 0 < float < 1
    Coverage percentage
  - cdf, CoverageCDF: {cov} | 0 < float < 1
    CDF if no extra coverage needed
  - osig, OutlierSigma: {1.5*ksig} | float
    Multiple of standard deviation to identify outliers; default is
    150% of the nominal coverage calculated using t-distribution
  - searchcols: {None} | list[str]
    List of cols to use for finding matches; default is all float
    cols of db
  - tol: {1e-8} | float
    Default tolerance for matching conditions
  - tols: dict[float]
    Dict of tolerances for specific columns during search
- Outputs:
  - a: float
    Lower bound of coverage interval
  - b: float
    Upper bound of coverage interval
- Versions:
  - 2018-09-28 @ddalle: Version 1.0
  - 2020-02-21 @ddalle: Rewritten from cape.attdb.fm
- est_range(dbt, col, mask=None, cov=0.95, **kw)¶
Calculate Student’s t-distribution confidence range
If the nominal application of the Student’s t-distribution fails to cover a high enough fraction of the data, the bounds are extended until the data is covered.
- Call:
  >>> r = db.est_range(dbt, col, mask, cov, **kw)
- Inputs:
  - db: DataKit
    Data kit with response surfaces
  - dbt: dict | DataKit
    Target data set
  - col: str
    Name of data column to analyze
  - mask: np.ndarray[bool | int]
    Subset of db to consider
  - maskt: np.ndarray[bool | int]
    Subset of dbt to consider
  - cov: {0.95} | 0 < float < 1
    Coverage percentage
  - cdf, CoverageCDF: {cov} | 0 < float < 1
    CDF if no extra coverage needed
  - osig, OutlierSigma: {1.5*ksig} | float
    Multiple of standard deviation to identify outliers; default is
    150% of the nominal coverage calculated using t-distribution
  - searchcols: {None} | list[str]
    List of cols to use for finding matches; default is all float
    cols of db
  - tol: {1e-8} | float
    Default tolerance for matching conditions
  - tols: dict[float]
    Dict of tolerances for specific columns during search
- Outputs:
  - r: float
    Half-width of coverage range
- Versions:
  - 2018-09-28 @ddalle: Version 1.0
  - 2020-02-21 @ddalle: Rewritten from cape.attdb.fm
- est_uq_col(db2, col, ucol, **kw)¶
Quantify uncertainty interval for all points of one ucol
- Call:
  >>> A, U = db1.est_uq_col(db2, col, ucol, **kw)
- Inputs:
  - db1: DataKit
    Database with scalar output functions
  - db2: DataKit
    Target database to compare against
  - col: str
    Name of data column to analyze
  - ucol: str
    Name of UQ column to estimate
- Keyword Arguments:
  - nmin: {30} | int > 0
    Minimum number of points in window
  - cov, Coverage: {0.99865} | 0 < float < 1
    Fraction of data that must be covered by UQ term
  - cdf, CoverageCDF: {cov} | 0 < float < 1
    Coverage fraction assuming perfect distribution
  - test_values: {{}} | dict
    Candidate values of each response_arg for comparison
  - test_bkpts: {{}} | dict
    Candidate break points (1D unique) for response_args
- Required Attributes:
  - db1.response_args[col]: list[str]
    List of args to evaluate col
  - db1.response_args[ucol]: list[str]
    List of args to evaluate ucol
  - db1.uq_ecols[ucol]: {[]} | list
    List of extra UQ cols related to ucol
  - db1.uq_acols[ucol]: {[]} | list
    Aux cols whose deltas are used to estimate ucol
  - db1.uq_efuncs: {{}} | dict[callable]
    Function to calculate any uq_ecols
  - db1.uq_afuncs: {{}} | dict[callable]
    Function to use aux cols when estimating ucol
- Outputs:
  - A: np.ndarray, size=(nx, na)
    Conditions for each ucol window, for nx windows, each with na
    values (length of db1.response_args[ucol])
  - U: np.ndarray, size=(nx, nu+1)
    Values of ucol and any nu "extra" uq_ecols for each window
- Versions:
  - 2019-02-15 @ddalle: Version 1.0
  - 2020-04-02 @ddalle: Version 2.0; from EstimateUQ_coeff()
- est_uq_db(db2, cols=None, **kw)¶
Quantify uncertainty for all col, ucol pairings in DB
- Call:
  >>> db1.est_uq_db(db2, cols=None, **kw)
- Inputs:
  - db1: DataKit
    Database with scalar output functions
  - db2: DataKit
    Target database to compare against
  - cols: {None} | list[str]
    List of data columns to process
- Keyword Arguments:
  - nmin: {30} | int > 0
    Minimum number of points in window
  - cov, Coverage: {0.99865} | 0 < float < 1
    Fraction of data that must be covered by UQ term
  - cdf, CoverageCDF: {cov} | 0 < float < 1
    Coverage fraction assuming perfect distribution
  - test_values: {{}} | dict
    Candidate values of each col for comparison
  - test_bkpts: {{}} | dict
    Candidate break points (1D unique) for col
- Required Attributes:
  - db1.uq_cols: dict[list]
    Names of UQ col for each col, if any
  - db1.response_args[col]: list[str]
    List of args to evaluate col
  - db1.response_args[ucol]: list[str]
    List of args to evaluate ucol
  - db1.uq_ecols[ucol]: {[]} | list
    List of extra UQ cols related to ucol
  - db1.uq_acols[ucol]: {[]} | list
    Aux cols whose deltas are used to estimate ucol
  - db1.uq_efuncs: {{}} | dict[callable]
    Function to calculate any uq_ecols
  - db1.uq_afuncs: {{}} | dict[callable]
    Function to use aux cols when estimating ucol
- Versions:
  - 2019-02-15 @ddalle: Version 1.0
  - 2020-04-02 @ddalle: Version 2.0; was EstimateUQ_DB()
- est_uq_point(db2, col, ucol, *a, **kw)¶
Quantify uncertainty interval for a single point or window
- Call:
  >>> u, U = db1.est_uq_point(db2, col, ucol, *a, **kw)
- Inputs:
  - db1: DataKit
    Database with scalar output functions
  - db2: DataKit
    Target database to compare against
  - col: str
    Name of data column to analyze
  - ucol: str
    Name of UQ column to estimate
  - a: tuple[float]
    Values of the response args for ucol defining the window
- Keyword Arguments:
  - nmin: {30} | int > 0
    Minimum number of points in window
  - cov, Coverage: {0.99865} | 0 < float < 1
    Fraction of data that must be covered by UQ term
  - cdf, CoverageCDF: {cov} | 0 < float < 1
    Coverage fraction assuming perfect distribution
  - test_values: {{}} | dict
    Candidate values of each response_arg for comparison
  - test_bkpts: {{}} | dict
    Candidate break points (1D unique) for response_args
- Required Attributes:
  - db1.response_args[col]: list[str]
    List of args to evaluate col
  - db1.response_args[ucol]: list[str]
    List of args to evaluate ucol
  - db1.uq_ecols[ucol]: {[]} | list
    List of extra UQ cols related to ucol
  - db1.uq_acols[ucol]: {[]} | list
    Aux cols whose deltas are used to estimate ucol
  - db1.uq_efuncs: {{}} | dict[callable]
    Function to calculate any uq_ecols
  - db1.uq_afuncs: {{}} | dict[callable]
    Function to use aux cols when estimating ucol
- Outputs:
  - u: float
    Single uncertainty estimate for generated window
  - U: tuple[float]
    Values of any "extra" uq_ecols
- Versions:
  - 2019-02-15 @ddalle: Version 1.0
  - 2020-04-02 @ddalle: Second version
- filter_repeats(args, cols=None, **kw)¶
Remove duplicate points or close neighbors
- Call:
  >>> db.filter_repeats(args, cols=None, **kw)
- Inputs:
  - db: DataKit
    Data container
  - args: list[str]
    List of column names to match
  - cols: {None} | list[str]
    Columns to filter (default is all db.cols with correct size, not
    in args, and of float type)
  - mask: np.ndarray[bool | int]
    Subset of db to consider
  - function: {"mean"} | callable
    Function to use for filtering
  - translators: dict[str]
    Alternate names; col -> trans[col]
  - prefix: str | dict
    Universal prefix or col-specific prefixes
  - suffix: str | dict
    Universal suffix or col-specific suffixes
  - kw: dict
    Additional values to use for evaluation in find()
- Versions:
  - 2020-05-05 @ddalle: Version 1.0
- find(args, *a, **kw)¶
Find cases that match a condition [within a tolerance]
- Call:
  >>> I, J = db.find(args, *a, **kw)
  >>> Imap, J = db.find(args, *a, **kw)
- Inputs:
  - db: DataKit
    Data container
  - args: list[str]
    List of column names to match
  - a: tuple[float]
    Values of the arguments
  - gtcons, GreaterThanCons: {{}} | dict
    Dictionary of greater-than constraints, e.g. {"mach": 1.0} to
    apply db["mach"] > 1.0
  - gtecons, GreaterThanEqualCons: {{}} | dict
    Dict of greater-than-or-equal-to constraints
  - ltcons, LessThanCons: {{}} | dict
    Dict of less-than constraints
  - ltecons, LessThanEqualCons: {{}} | dict
    Dict of less-than-or-equal-to constraints
  - mask: np.ndarray[bool | int]
    Subset of db to consider
  - tol: {1e-4} | float >= 0
    Default tolerance for all args
  - tols: {{}} | dict[float >= 0]
    Dictionary of tolerances specific to arguments
  - once: True | {False}
    Option to find at most one db index per test point
  - mapped: True | {False}
    Option to switch output to Imap (overrides once)
  - kw: dict
    Additional values to use during evaluation
- Outputs:
  - I: np.ndarray[int]
    Indices of cases in db that match conditions
  - J: np.ndarray[int]
    Indices of (a, kw) that have a match in db
  - Imap: list[np.ndarray]
    List of db indices for each test point in J
- Versions:
  - 2019-03-11 @ddalle: Version 1.0 (DBCoeff)
  - 2019-12-26 @ddalle: Version 1.0
  - 2020-02-20 @ddalle: Version 2.0; mask, once kwargs
  - 2022-09-15 @ddalle: Version 3.0; gtcons, etc.
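A minimal sketch, assuming db has "mach" and "alpha" columns
(hypothetical names):

    >>> # Find cases matching mach=0.9, alpha=4.0 within a tolerance
    >>> I, J = db.find(["mach", "alpha"], 0.9, 4.0, tol=1e-3)
    >>> # Match alpha=4.0 among supersonic cases only
    >>> I, J = db.find(["alpha"], 4.0, gtcons={"mach": 1.0})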
- find_repeats(cols, **kw)¶
Find repeats based on list of columns
- Call:
  >>> repeats = db.find_repeats(cols, **kw)
- Inputs:
  - db: DataKit
    Data container
  - cols: list[str]
    List of column names to match
  - mask: np.ndarray[bool | int]
    Subset of db to consider
  - tol: {1e-4} | float >= 0
    Default tolerance for all args
  - tols: {{}} | dict[float >= 0]
    Dictionary of tolerances specific to arguments
  - kw: dict
    Additional values to use during evaluation
- Outputs:
  - repeats: list[np.ndarray]
    List of db indices of repeats; each entry of repeats is an array
    of indices of cases that match for every col in cols
- Versions:
  - 2021-09-10 @ddalle: Version 1.0
- genr8_bkpts(col, nmin=5, tol=1e-12, mask=None)¶
Generate list of unique values for one col
- Call:
  >>> B = db.genr8_bkpts(col, nmin=5, tol=1e-12, mask=None)
- Inputs:
  - db: DataKit
    Data container
  - col: str
    Individual lookup variable
  - nmin: {5} | int > 0
    Minimum number of data points at one value of a key
  - tol: {1e-12} | float >= 0
    Tolerance for values considered to be equal
  - mask: np.ndarray[bool | int]
    Mask of which database indices to consider
- Outputs:
  - B: np.ndarray | list
    Unique values of db[col] with at least nmin entries
- Versions:
  - 2020-05-06 @ddalle: Version 1.0
- genr8_griddata_weights(args, *a, **kw)¶
Generate interpolation weights for griddata()
- Call:
  >>> W = db.genr8_griddata_weights(args, *a, **kw)
- Inputs:
  - db: DataKit
    Data container
  - args: list[str]
    List of arguments
  - a: tuple[np.ndarray]
    Test values at which to interpolate
  - mask: np.ndarray[bool]
    Mask of which database indices to consider
  - I: np.ndarray[int]
    Database indices to consider
  - method: {"linear"} | "cubic" | "nearest"
    Interpolation method; "cubic" only for 1D or 2D
  - rescale: True | {False}
    Rescale input points to unit cube before interpolation
- Outputs:
  - W: np.ndarray[float]
    Interpolation weights; same size as test points a
- Versions:
  - 2020-03-10 @ddalle: Version 1.0
- genr8_integral(col, xcol=None, **kw)¶
Integrate the columns of a 2D data col
- Call:
  >>> y = db.genr8_integral(col, xcol=None, **kw)
- Inputs:
  - db: DataKit
    Database with analysis tools
  - col: str
    Name of data column to integrate
  - xcol: {None} | str
    Name of column to use as x-coords for integration
  - mask: np.ndarray[bool | int]
    Mask or indices of which cases to integrate
  - x: {None} | np.ndarray
    Optional 1D or 2D x-coordinates directly specified
  - dx: {1.0} | float
    Uniform spacing to use if xcol and x are not used
  - method: {"trapz"} | "left" | "right" | callable
    Integration method or callable function taking two args like
    np.trapz()
- Outputs:
  - y: np.ndarray
    1D array of integral of each column of db[col]
- Versions:
  - 2020-03-24 @ddalle: Version 1.0
  - 2020-06-02 @ddalle: Added mask, callable method
  - 2020-06-04 @ddalle: Split _genr8_integral()
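A minimal sketch, assuming db has a 2D sectional-load column "dCN"
whose x-coordinates are stored in a column "x" (hypothetical names):

    >>> # Integrate each case's dCN distribution over x
    >>> CN = db.genr8_integral("dCN", xcol="x", method="trapz")
    >>> # One integral per case, i.e. per column of db["dCN"]
    >>> CN.shape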
- genr8_rbf(col, args, I=None, **kw)¶
Create a global radial basis function for one column
- Call:
  >>> rbf = db.genr8_rbf(col, args, I=None)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Data column to create RBF for
  - args: list[str]
    List of (ordered) input cols
  - I: {None} | np.ndarray
    Indices of cases to include in RBF (default is all)
  - function: {"cubic"} | str
    Radial basis function type
  - smooth: {0.0} | float >= 0
    Smoothing factor, 0.0 for exact interpolation
- Outputs:
  - rbf: scipy.interpolate.rbf.Rbf
    Radial basis function for col
- Versions:
  - 2019-01-01 @ddalle: Version 1.0
  - 2019-12-17 @ddalle: Ported from tnakit
  - 2020-02-22 @ddalle: Single-col version
  - 2020-03-06 @ddalle: Name from create_rbf()
- genr8_rbf_cols(col, **kw)¶
Generate data to describe existing RBF(s) for col
This creates a dict of various properties that are used by the radial
basis function (or list thereof) within db.rbf. It is possible to
recreate the RBF(s) with only this information, thus avoiding the need
to retrain the RBF network(s).
- Call:
  >>> vals = db.genr8_rbf_cols(col, **kw)
- Inputs:
  - db: DataKit
    DataKit with db.rbf[col] defined
  - col: str
    Name of column whose RBF will be analyzed
  - expand: True | {False}
    Repeat properties like eps for each node of RBF (for uniform data
    size, usually to write to CSV file)
- Outputs:
  - vals: dict[np.ndarray]
    Data used in db.rbf[col]
  - vals[col]: np.ndarray[float]
    Values of col used in RBF (may differ from db[col])
  - vals[col+"_method"]: np.ndarray[int]
    Response method index: 4: "rbf"; 5: "rbf-map"; 6: "rbf-schedule"
  - vals[col+"_rbf"]: np.ndarray[float]
    Weight for each RBF node
  - vals[col+"_func"]: np.ndarray[int]
    RBF basis function index: 0: "multiquadric";
    1: "inverse_multiquadric"; 2: "gaussian"; 3: "linear"; 4: "cubic";
    5: "quintic"; 6: "thin_plate"
  - vals[col+"_eps"]: np.ndarray[float]
    Epsilon scaling factor for (each) RBF
  - vals[col+"_smooth"]: np.ndarray[float]
    Smoothing factor for (each) RBF
  - vals[col+"_N"]: np.ndarray[float]
    Number of nodes in (each) RBF
  - vals[col+"_x0"]: np.ndarray[float]
    Values of first response arg if db.rbf[col] is a list
  - vals[col+"_X"]: np.ndarray[float]
    2D matrix of node locations for (each) RBF
  - vals[col+"_x.<xcol>"]: np.ndarray
    1D array of node location values for each response arg
  - vals[col+"_xcols"]: list[str]
    List of arguments for col
- Versions:
  - 2021-09-15 @ddalle: Version 1.0
- genr8_rdiff(db2, cols, **kw)¶
Generate deltas between responses of two databases
- Call:
  >>> ddb = db.genr8_rdiff(db2, cols, **kw)
- Inputs:
  - db: DataKit
    Data container
  - db2: DataKit
    Second data container
  - cols: list[str]
    Data columns to analyze
- Outputs:
  - ddb: db.__class__
    New database with filtered db and db2 diffs
  - ddb[arg]: np.ndarray
    Test values for each arg in col response args
  - ddb[col]: np.ndarray
    Smoothed difference between db2 and db
- Versions:
  - 2020-05-08 @ddalle: Version 1.0
- genr8_rdiff_by_rbf(db2, cols, scol=None, **kw)¶
Generate smoothed deltas between two responses
- Call:
  >>> ddb = db.genr8_rdiff_by_rbf(db2, cols, scol, **kw)
- Inputs:
  - db: DataKit
    Data container
  - db2: DataKit
    Second data container
  - cols: list[str]
    Data columns to analyze
  - scol: {None} | str | list
    List of arguments to define slices on which to smooth
  - smooth: {0} | float >= 0
    Smoothing parameter for interpolation on slices
  - function: {"multiquadric"} | str
    RBF basis function type, see scirbf.Rbf()
  - test_values: {db} | dict
    Candidate values of each arg for differencing
  - test_bkpts: {None} | dict
    Candidate break points (1D unique values) to override test_values;
    used to create full-factorial matrix
  - tol: {1e-4} | float > 0
    Default tolerance for matching slice constraints
  - tols: {{}} | dict[float >= 0]
    Specific tolerance for particular slice keys
  - v, verbose: True | {False}
    Verbose STDOUT flag
- Outputs:
  - ddb: db.__class__
    New database with filtered db and db2 diffs
  - ddb[arg]: np.ndarray
    Test values for each arg in col response args
  - ddb[col]: np.ndarray
    Smoothed difference between db2 and db
  - ddb._slices: list[np.ndarray]
    Saved lists of indices on which smoothing is performed
- Versions:
  - 2020-05-08 @ddalle: Fork from DBCoeff.DiffDB()
- genr8_source(ext, cls, cols=None, **kw)¶
Create a new source file interface
- Call:
  >>> dbf = db.genr8_source(ext, cls)
  >>> dbf = db.genr8_source(ext, cls, cols=None, **kw)
- Inputs:
  - db: DataKit
    Generic database
  - ext: str
    Source type, by extension, to retrieve
  - cls: type
    Subclass of BaseFile to create (if needed)
  - cols: {db.cols} | list[str]
    List of data columns to include in dbf
  - attrs: {None} | list[str]
    Extra attributes of db to save for .mat files
- Outputs:
  - dbf: cape.attdb.ftypes.basefile.BaseFile
    Data file interface
- Versions:
  - 2020-03-06 @ddalle: Split from make_source()
- genr8_sweeps(args, **kw)¶
Divide data into sweeps with constant values of some cols
- Call:
  >>> sweeps = db.genr8_sweeps(args, **kw)
- Inputs:
  - db: DataKit
    Data container
  - args: list[str]
    List of column names to match
  - mask: np.ndarray[bool | int]
    Subset of db to consider
  - tol: {1e-4} | float >= 0
    Default tolerance for all args
  - tols: {{}} | dict[float >= 0]
    Dictionary of tolerances specific to arguments
  - kw: dict
    Additional values to use during evaluation
- Outputs:
  - sweeps: list[np.ndarray]
    Indices of entries with constant (within tol) values of each arg
- Versions:
  - 2020-05-06 @ddalle: Version 1.0
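A minimal sketch, assuming db has a "mach" column (hypothetical name):

    >>> # Group cases into sweeps of constant Mach number
    >>> sweeps = db.genr8_sweeps(["mach"], tol=1e-3)
    >>> for I in sweeps:
    ...     print(db["mach"][I[0]], len(I))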
- genr8_udiff_by_rbf(db2, cols, scol=None, **kw)¶
Generate increment and UQ estimate between two responses
- Call:
  >>> ddb = db.genr8_udiff_by_rbf(db2, cols, scol=None, **kw)
- Inputs:
  - db: DataKit
    Data container
  - db2: DataKit
    Second data container
  - cols: list[str]
    Data columns to analyze
  - scol: {None} | str | list
    List of arguments to define slices on which to smooth
  - smooth: {0} | float >= 0
    Smoothing parameter for interpolation on slices
  - function: {"multiquadric"} | str
    RBF basis function type, see scirbf.Rbf()
  - test_values: {db} | dict
    Candidate values of each arg for differencing
  - test_bkpts: {None} | dict
    Candidate break points (1D unique values) to override test_values;
    used to create full-factorial matrix
  - tol: {1e-4} | float > 0
    Default tolerance for matching slice constraints
  - tols: {{}} | dict[float >= 0]
    Specific tolerance for particular slice keys
- Outputs:
  - ddb: db.__class__
    New database with filtered db and db2 diffs
  - ddb[arg]: np.ndarray
    Test values for each arg in col response args
  - ddb[col]: np.ndarray
    Smoothed difference between db2 and db
  - ddb._slices: list[np.ndarray]
    Saved lists of indices on which smoothing is performed
- Versions:
  - 2020-05-08 @ddalle: Version 1.0
- genr8_window(n, args, *a, **kw)¶
Get indices of neighboring points
This function creates a moving “window” for averaging or for performing other statistics (especially estimating difference between two databases).
- Call:
  >>> I = db.genr8_window(n, args, *a, **kw)
- Inputs:
  - db: DataKit
    Database with evaluation tools
  - n: int
    Minimum number of points in window
  - args: list[str]
    List of arguments to use for windowing
  - a[0]: float
    Value of the first argument
  - a[1]: float
    Value of the second argument
- Keyword Arguments:
  - test_values: {db} | DBCoeff | dict
    Specify values of each arg in args that are the candidate points
    for the window; default is from db
  - test_bkpts: {db.bkpts} | dict
    Specify candidate window boundaries; must be ascending array of
    unique values for each arg
- Outputs:
  - I: np.ndarray
    Indices of cases (relative to test_values) in window
- Versions:
  - 2019-02-13 @ddalle: Version 1.0
  - 2020-04-01 @ddalle: Modified from tnakit.db
- get_all_values(col)¶
Attempt to get all values of a specified argument
This will use db.response_arg_converters if possible.
- Call:
  >>> V = db.get_all_values(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column
- Outputs:
  - V: None | np.ndarray[float]
    db[col] if available, otherwise an attempt to apply
    db.response_arg_converters[col]
- Versions:
  - 2019-03-11 @ddalle: Version 1.0
  - 2019-12-18 @ddalle: Ported from tnakit
- get_arg_alternates(col)¶
Get set of usable keyword args for col
- Call:
  >>> altcols = db.get_arg_alternates(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column with response method
- Outputs:
  - altcols: set[str]
    Cols that are used by response for col
- Versions:
  - 2020-04-24 @ddalle: Version 1.0
- get_arg_value(i, k, *a, **kw)¶
Get the value of the ith argument to a function
- Call:
  >>> v = db.get_arg_value(i, k, *a, **kw)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - i: int
    Argument index within db.response_args
  - k: str
    Name of evaluation argument
  - a: tuple
    Arguments to __call__()
  - kw: dict
    Keyword arguments to __call__()
- Outputs:
  - v: float | np.ndarray
    Value of the argument, possibly converted
- Versions:
  - 2019-02-28 @ddalle: Version 1.0
  - 2019-12-18 @ddalle: Ported from tnakit
- get_arg_value_dict(*a, **kw)¶
Return a dictionary of normalized argument variables
Specifically, the dictionary contains a key for every argument used to
evaluate the coefficient, which is either the first positional
argument or given as the keyword argument col.
- Call:
  >>> X = db.get_arg_value_dict(*a, **kw)
  >>> X = db.get_arg_value_dict(col, x1, x2, ..., k3=x3)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column
  - x1: float | np.ndarray
    Value(s) of first argument
  - x2: float | np.ndarray
    Value(s) of second argument, if applicable
  - k3: str
    Name of third argument or optional variant
  - x3: float | np.ndarray
    Value(s) of argument k3, if applicable
- Outputs:
  - X: dict[np.ndarray]
    Dictionary of values for each key used to evaluate col according
    to db.response_args[col]; each entry of X has the same size
- Versions:
  - 2019-03-12 @ddalle: Version 1.0
  - 2019-12-18 @ddalle: Ported from tnakit
- get_bkpt(col, *I)¶
Extract a breakpoint by index, with error checking
- Call:
  >>> v = db.get_bkpt(col, *I)
  >>> v = db.get_bkpt(col)
  >>> v = db.get_bkpt(col, i)
  >>> v = db.get_bkpt(col, i, j)
  >>> v = db.get_bkpt(col, i, j, ...)
- Inputs:
  - db: DataKit
    Data container
  - col: str
    Individual lookup variable from db.bkpts
  - I: tuple
    Tuple of lookup indices
  - i: int
    (Optional) first break point list index
  - j: int
    (Optional) second break point list index
- Outputs:
  - v: float | np.ndarray
    Break point or array of break points
- Versions:
  - 2018-12-31 @ddalle: Version 1.0
  - 2019-12-16 @ddalle: Updated for rdbnull
- get_bkpt_index(col, v, tol=1e-08)¶
Get interpolation weights for 1D linear interpolation
- Call:
  >>> i0, i1, f = db.get_bkpt_index(col, v, tol=1e-8)
- Inputs:
  - db: DataKit
    Data container
  - col: str
    Individual lookup variable from db.bkpts
  - v: float
    Value at which to look up
  - tol: {1e-8} | float >= 0
    Tolerance for left and right bounds
- Outputs:
  - i0: None | int
    Lower bound index; if None, extrapolation below
  - i1: None | int
    Upper bound index; if None, extrapolation above
  - f: 0 <= float <= 1
    Lookup fraction, 1.0 if v is equal to upper bound
- Versions:
  - 2018-12-30 @ddalle: Version 1.0
  - 2019-12-16 @ddalle: Updated for rdbnull
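A minimal sketch, assuming db.bkpts["mach"] exists and 0.95 lies
between two break points (hypothetical values):

    >>> i0, i1, f = db.get_bkpt_index("mach", 0.95)
    >>> # Linear interpolation between the bracketing break points
    >>> m0 = db.get_bkpt("mach", i0)
    >>> m1 = db.get_bkpt("mach", i1)
    >>> m = (1 - f)*m0 + f*m1   # recovers 0.95 (illustrative)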
- get_bkpt_index_schedule(k, v, j)¶
Get weights for 1D interpolation of k at a slice of the master key
- Call:
  >>> i0, i1, f = db.get_bkpt_index_schedule(k, v, j)
- Inputs:
  - db: DataKit
    Data container
  - k: str
    Name of trajectory key in db.bkpts for lookup
  - v: float
    Value at which to look up
  - j: int
    Index of master "slice" key, if k has scheduled break points
- Outputs:
  - i0: None | int
    Lower bound index; if None, extrapolation below
  - i1: None | int
    Upper bound index; if None, extrapolation above
  - f: 0 <= float <= 1
    Lookup fraction, 1.0 if v is equal to upper bound
- Versions:
  - 2018-04-19 @ddalle: Version 1.0
- get_col(k=None, defnames=[], **kw)¶
Process a key name, using an ordered list of defaults
- Call:
  >>> col = db.get_col(k=None, defnames=[], **kw)
- Inputs:
  - db: DataKit
    Data container
  - k: {None} | str
    User-specified col name; if None, automatic value
  - defnames: list
    List of applicable default names for the col
  - title: {"lookup"} | str
    Key title to use in any error messages
  - error: {True} | False
    Raise an exception if no col is found
- Outputs:
  - col: k | defnames[0] | defnames[1] | ...
    Name of lookup key in db.cols
- Versions:
  - 2018-06-22 @ddalle: Version 1.0
- get_col_png(col)¶
Get name/tag of PNG image to use when plotting col
- Call:
  >>> png = db.get_col_png(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Data column to associate with png
- Outputs:
  - png: None | str
    Name/abbreviation/tag of PNG image to use
- Versions:
  - 2020-04-02 @ddalle: Version 1.0
- get_col_seam(col)¶
Get name/tag of seam curve to use when plotting col
- Call:
  >>> seam = db.get_col_seam(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Data column to associate with seam curve
- Outputs:
  - seam: str
    Name used to tag seam curve
- Versions:
  - 2020-04-03 @jmeeroff: Version 1.0
- get_fullfactorial(scol=None, cols=None)¶
Create full-factorial matrix of values in break points
This allows some of the break points cols to be scheduled, i.e. there are different matrices of cols for each separate value of scol.
- Call:
  >>> X, slices = db.get_fullfactorial(scol=None, cols=None)
- Inputs:
  - db: DataKit
    Data container
  - scol: {None} | str | list
    Optional name of slicing col(s)
  - cols: {None} | list[str]
    List of (ordered) input keys, default is from db.bkpts
- Outputs:
  - X: dict
    Dictionary of full-factorial matrix
  - slices: dict[np.ndarray]
    Array of slice values for each col in scol
- Versions:
  - 2018-11-16 @ddalle: Version 1.0
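A minimal sketch, assuming break points exist for "mach" and "alpha"
(hypothetical names):

    >>> db.create_bkpts(["mach", "alpha"])
    >>> # Every combination of the mach and alpha break points
    >>> X, slices = db.get_fullfactorial(cols=["mach", "alpha"])
    >>> X["mach"].size == X["alpha"].size
    True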
- get_ndim(col)¶
Get database dimension for column col
- Call:
  >>> ndim = db.get_ndim(col)
- Inputs:
  - db: cape.attdb.rdbscalar.DBResponseLinear
    Database with multidimensional output functions
  - col: str
    Name of column to evaluate
- Outputs:
  - ndim: {0} | int
    Dimension of col in database
- Versions:
  - 2020-03-12 @ddalle: Version 1.0
- get_output_ndim(col)¶
Get output dimension for column col
- Call:
  >>> ndim = db.get_output_ndim(col)
- Inputs:
  - db: cape.attdb.rdbscalar.DBResponseLinear
    Database with multidimensional output functions
  - col: str
    Name of column to evaluate
- Outputs:
  - ndim: {0} | int
    Dimension of col at a single condition
- Versions:
  - 2019-12-27 @ddalle: Version 1.0
  - 2020-03-12 @ddalle: Keyed from "Dimension"
- get_output_xarg1(col)¶
Get single arg for output for column col
- Call:
  >>> xarg = db.get_output_xarg1(col)
- Inputs:
  - db: cape.attdb.rdbscalar.DBResponseLinear
    Database with multidimensional output functions
  - col: str
    Name of column to evaluate
- Outputs:
  - xarg: None | str
    Input arg to function for one condition of col
- Versions:
  - 2021-12-16 @ddalle: Version 1.0
- get_output_xargs(col)¶
Get list of args to output for column col
- Call:
  >>> xargs = db.get_output_xargs(col)
- Inputs:
  - db: cape.attdb.rdbscalar.DBResponseLinear
    Database with multidimensional output functions
  - col: str
    Name of column to evaluate
- Outputs:
  - xargs: {[]} | list[str]
    List of input args to one condition of col
- Versions:
  - 2019-12-30 @ddalle: Version 1.0
  - 2020-03-27 @ddalle: From db.defns to db.response_xargs
- get_png_fname(png)¶
Get name of PNG file
- Call:
  >>> fpng = db.get_png_fname(png)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - png: str
    Name used to tag this PNG image
- Outputs:
  - fpng: None | str
    Name of PNG file, if any
- Versions:
  - 2020-04-02 @ddalle: Version 1.0
- get_png_kwargs(png)¶
Get evaluation keyword arguments for PNG file
- Call:
  >>> kw = db.get_png_kwargs(png)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - png: str
    Name used to tag this PNG image
- Outputs:
  - kw: {{}} | MPLOpts
    Options to use when showing PNG image (copied)
- Versions:
  - 2020-04-02 @ddalle: Version 1.0
- get_rbf(col, *I)¶
Extract a radial basis function, with error checking
- Call:
  >>> f = db.get_rbf(col, *I)
  >>> f = db.get_rbf(col)
  >>> f = db.get_rbf(col, i)
  >>> f = db.get_rbf(col, i, j)
  >>> f = db.get_rbf(col, i, j, ...)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of column to evaluate
  - I: tuple
    Tuple of lookup indices
  - i: int
    (Optional) first RBF list index
  - j: int
    (Optional) second RBF list index
- Outputs:
  - f: scipy.interpolate.rbf.Rbf
    Callable radial basis function
- Versions:
  - 2018-12-31 @ddalle: Version 1.0
  - 2019-12-17 @ddalle: Ported from tnakit
- get_response_acol(col)¶
Get names of any aux cols related to primary col
- Call:
  >>> acols = db.get_response_acol(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column to evaluate
- Outputs:
  - acols: list[str]
    Names of aux columns required to evaluate col
- Versions:
  - 2020-03-23 @ddalle: Version 1.0
  - 2020-04-21 @ddalle: Renamed from eval_acols
- get_response_arg_aliases(col)¶
Get alias names for evaluation args for a data column
- Call:
  >>> aliases = db.get_response_arg_aliases(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column to evaluate
- Outputs:
  - aliases: {{}} | dict
    Alternate names for args while evaluating col
- Versions:
  - 2019-12-30 @ddalle: Version 1.0
- get_response_arg_converter(k)¶
Get evaluation argument converter
- Call:
  >>> f = db.get_response_arg_converter(k)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - k: str | unicode
    Name of argument
- Outputs:
  - f: None | callable
    Callable converter
- Versions:
  - 2019-03-13 @ddalle: Version 1.0
  - 2019-12-18 @ddalle: Ported from tnakit
- get_response_args(col, argsdef=None)¶
Get list of evaluation arguments
- Call:
  >>> args = db.get_response_args(col, argsdef=None)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of column to evaluate
  - argsdef: {None} | list[str]
    Default arg list if none found in db
- Outputs:
  - args: list[str]
    List of parameters used to evaluate col
- Versions:
  - 2019-03-11 @ddalle: Forked from __call__()
  - 2019-12-18 @ddalle: Ported from tnakit
  - 2020-03-26 @ddalle: Added argsdef
  - 2020-04-21 @ddalle: Renamed from get_eval_args()
- get_response_func(col)¶
Get callable function predefined for a column
- Call:
  >>> fn = db.get_response_func(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column to evaluate
- Outputs:
  - fn: None | callable
    Specified function for col
- Versions:
  - 2019-12-28 @ddalle: Version 1.0
- get_response_kwargs(col)¶
Get any keyword arguments passed to col evaluator
- Call:
  >>> kwargs = db.get_response_kwargs(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column to evaluate
- Outputs:
  - kwargs: {{}} | dict
    Keyword arguments to add while evaluating col
- Versions:
  - 2019-12-30 @ddalle: Version 1.0
- get_response_method(col)¶
Get evaluation method (if any) for a column
- Call:
  >>> method = db.get_response_method(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of column to evaluate
- Outputs:
  - method: None | str
    Name of evaluation method for col or "_"
- Versions:
  - 2019-03-13 @ddalle: Version 1.0
  - 2019-12-18 @ddalle: Ported from tnakit
  - 2019-12-30 @ddalle: Added default
- get_schedule(args, x, extrap=True)¶
Get lookup points for interpolation scheduled by master key
This is a utility for situations where the break points of some keys
vary as a schedule of another key; for example, the maximum angle of
attack in the database may be different at each Mach number. It
provides the appropriate points at which to interpolate the remaining
keys at the values of the first key both above and below the input
value. The first argument, args[0], is the master key that controls
the schedule.
- Call:
  >>> i0, i1, f, x0, x1 = db.get_schedule(args, x, **kw)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - args: list[str]
    List of input argument names (args[0] is the master key)
  - x: list | tuple | np.ndarray
    Vector of values for each argument in args
  - extrap: {True} | False
    If False, raise an error when a lookup value is outside the break
    point range for any key at any slice
- Outputs:
  - i0: None | int
    Lower bound index; if None, extrapolation below
  - i1: None | int
    Upper bound index; if None, extrapolation above
  - f: 0 <= float <= 1
    Lookup fraction, 1.0 if v is at upper bound
  - x0: np.ndarray[float]
    Evaluation values for args[1:] at i0
  - x1: np.ndarray[float]
    Evaluation values for args[1:] at i1
- Versions:
  - 2019-04-19 @ddalle: Version 1.0
  - 2019-07-26 @ddalle: Vectorized
  - 2019-12-18 @ddalle: Ported from tnakit
- get_seam_col(seam)¶
Get column names that define named seam curve
- Call:
  >>> xcol, ycol = db.get_seam_col(seam)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - seam: str
    Name used to tag this seam curve
- Outputs:
  - xcol: str
    Name of col for seam curve x-coords
  - ycol: str
    Name of col for seam curve y-coords
- Versions:
  - 2020-03-31 @ddalle: Version 1.0
- get_seam_kwargs(seam)¶
Get evaluation keyword arguments for seam curve
- Call:
  >>> kw = db.get_seam_kwargs(seam)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - seam: str
    Name used to tag this seam curve
- Outputs:
  - kw: {{}} | MPLOpts
    Options to use when showing seam curve (copied)
- Versions:
  - 2020-04-03 @jmeeroff: Version 1.0
- get_source(ext=None, n=None)¶
Get a source by category (and number), if possible
- Call:
  >>> dbf = db.get_source(ext)
  >>> dbf = db.get_source(ext, n)
- Inputs:
  - db: DataKit
    Generic database
  - ext: {None} | str
    Source type, by extension, to retrieve
  - n: {None} | int >= 0
    Source number
- Outputs:
  - dbf: cape.attdb.ftypes.basefile.BaseFile
    Data file interface
- Versions:
  - 2020-02-13 @ddalle: Version 1.0
- get_uq_acol(ucol)¶
Get name of aux data cols needed to compute UQ col
- Call:
  >>> acols = db.get_uq_acol(ucol)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - ucol: str
    Name of UQ column to evaluate
- Outputs:
  - acols: list[str]
    Names of extra columns required to estimate ucol
- Versions:
  - 2020-03-23 @ddalle: Version 1.0
- get_uq_afunc(ucol)¶
Get function to estimate UQ column if aux cols are present
- Call:
  >>> afunc = db.get_uq_afunc(ucol)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - ucol: str
    Name of UQ col to estimate
- Outputs:
  - afunc: callable
    Function to estimate ucol
- Versions:
  - 2020-03-23 @ddalle: Version 1.0
- get_uq_col(col)¶
Get name of UQ columns for col
- Call:
  >>> ucol = db.get_uq_col(col)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of data column to evaluate
- Outputs:
  - ucol: None | str
    Name of UQ column for col
- Versions:
  - 2019-03-13 @ddalle: Version 1.0
  - 2019-12-18 @ddalle: Ported from tnakit
  - 2019-12-26 @ddalle: Renamed from get_uq_coeff()
- get_uq_ecol(ucol)¶
Get names of any extra UQ cols related to primary UQ col
- Call:
  >>> ecols = db.get_uq_ecol(ucol)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - ucol: str
    Name of UQ column to evaluate
- Outputs:
  - ecols: list[str]
    Names of extra columns required to evaluate ucol
- Versions:
  - 2020-03-21 @ddalle: Version 1.0
- get_uq_efunc(ecol)¶
Get function to evaluate extra UQ column
- Call:
  >>> efunc = db.get_uq_efunc(ecol)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - ecol: str
    Name of (correlated) UQ column to evaluate
- Outputs:
  - efunc: callable
    Function to evaluate ecol
- Versions:
  - 2020-03-20 @ddalle: Version 1.0
- get_values(col, mask=None)¶
Attempt to get all or some values of a specified column
This will use db.response_arg_converters if possible.
- Call:
  >>> V = db.get_values(col)
  >>> V = db.get_values(col, mask=None)
  >>> V = db.get_values(col, mask_index)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of evaluation argument
  - mask: {None} | np.ndarray[bool | int]
    Optional subset of db indices to access
- Outputs:
  - V: None | np.ndarray[float]
    db[col] if available, otherwise an attempt to apply
    db.response_arg_converters[col]
- Versions:
  - 2020-02-21 @ddalle: Version 1.0
- get_xvals(col, I=None, **kw)¶
Get values of specified column, which may need conversion
This function can be used to calculate independent variables (xvars)
that are derived from extant data columns. For example, if columns
alpha and beta (angle of attack and angle of sideslip, respectively)
are present and the user wants the total angle of attack aoap, this
function will attempt to use db.response_arg_converters["aoap"] to
convert the available alpha and beta data.
- Call:
  >>> V = db.get_xvals(col, I=None, **kw)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of column to access
  - I: None | np.ndarray | int
    Subset indices or single index
  - kw: dict
    Dictionary of values in place of db (e.g. kw[col] instead of
    db[col])
  - IndexKW: True | {False}
    Option to use kw[col][I] instead of just kw[col]
- Outputs:
  - V: np.ndarray | float
    Array of values or scalar for column col
- Versions:
  - 2019-03-12 @ddalle: Version 1.0
  - 2019-12-26 @ddalle: From tnakit.db.db1
- get_xvals_eval(k, *a, **kw)¶
Return values of a column from inputs to __call__()
For example, this can be used to derive the total angle of attack from
inputs to an evaluation call to CN when it is a function of mach,
alpha, and beta. This method attempts to use
db.response_arg_converters.
- Call:
  >>> V = db.get_xvals_eval(k, *a, **kw)
  >>> V = db.get_xvals_eval(k, coeff, x1, x2, ..., k3=x3)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - k: str
    Name of key to calculate
  - col: str
    Name of output data column
  - x1: float | np.ndarray
    Value(s) of first argument
  - x2: float | np.ndarray
    Value(s) of second argument, if applicable
  - k3: str
    Name of third argument or optional variant
  - x3: float | np.ndarray
    Value(s) of argument k3, if applicable
- Outputs:
  - V: np.ndarray
    Values of key k from conditions in a and kw
- Versions:
  - 2019-03-12 @ddalle: Version 1.0
  - 2019-12-26 @ddalle: From tnakit
- get_yvals_exact(col, I=None, **kw)¶
Get exact values of a data column
- Call:
  >>> V = db.get_yvals_exact(col, I=None, **kw)
- Inputs:
  - db: DataKit
    Database with scalar output functions
  - col: str
    Name of column to access
  - I: {None} | np.ndarray[int]
    Database indices
- Versions:
  - 2019-03-13 @ddalle: Version 1.0
  - 2019-12-26 @ddalle: From tnakit
- infer_rbf(col, vals=None, **kw)¶
Infer a radial basis function response mechanism
This looks for columns with specific suffixes in order to create a Radial Basis Function (RBF) response mechanism in db. Suppose that col is
"CY"
for this example, then this function will look for the following columns, either in col or vals:"CY"
: nominal values at which RBF was created"CY_method"
: response method index"CY_rbf"
: weights of RBF nodes"CY_func"
: RBF basis function index"CY_eps"
: scaling parameter for (each) RBF"CY_smooth:
: RBF smoothing parameter"CY_N"
: number of nodes in (each) RBF"CY_xcols"
: explicit list of RBF argument names"CY_X"
: 2D matrix of values of RBF args"CY_x0"
: values of first argument if not global RBF
The CY_method column will repeat one of the following values:
4
:"rbf"
5
:"rbf-map"
6
:"rbf-schedule"
The CY_func legend is as follows:
0
:"multiquadric"
1
:"inverse_multiquadric"
2
:"gaussian"
3
:"linear"
4
:"cubic"
5
:"quintic"
6
:"thin_plate"
- Call:
>>> db.infer_rbf(col, vals=None, **kw)
- Inputs:
- db:
DataKit
DataKit where db.rbf[col] will be defined
- col:
str
Name of column whose RBF will be constructed
- vals:
dict
[np.ndarray
] Data to use in RBF creation in place of db
- db:
- Effects:
- db[col]:
np.ndarray
[float
] Values of col used in RBF
- db[xcol]:
np.ndarray
[float
] Values of RBF args saved for each xcol
- db.bkpts[xcol]:
np.ndarray
[float
] Break points for each RBF arg
- db.rbf[col]:
Rbf
|list
One or more SciPy radial basis function instances
- db.response_methods[col]:
str
Name of inferred response method
- db[col]:
- Versions:
2021-09-16
@ddalle
: Version 1.0
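- Example:
A hedged sketch: rebuild the RBF response for "CY" from the suffixed columns listed above, assuming they are already present (e.g. after reading a MAT file saved from a DataKit that had an RBF response for "CY"); the file name is hypothetical.
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit("CY-rbf.mat")  # contains "CY", "CY_rbf", "CY_X", etc.
>>> db.infer_rbf("CY")              # defines db.rbf["CY"] and the response method
>>> CY = db("CY", 0.9, 2.0)         # evaluate, assuming args are (mach, alpha)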
- infer_rbfs(cols, **kw)¶
Infer radial basis function responses for several cols
- Call:
>>> db.infer_rbfs(cols, **kw)
- Inputs:
- db:
DataKit
DataKit where db.rbf[col] will be defined
- cols:
list
[str
] Names of columns whose RBFs will be constructed
- xcols: {
None
} |list
[str
] Explicit list of arguments for all cols
- db:
- Versions:
2021-09-16
@ddalle
: Version 1.0
- link_data(dbsrc, cols=None, **kw)¶
Save one or more cols from another database
- Call:
>>> db.link_data(dbsrc, cols=None)
- Inputs:
- db:
DataKit
Data container
- dbsrc:
dict
Additional data container, not required to be a datakit
- cols: {
None
} |list
[str
] List of columns to link (or dbsrc.cols)
- append:
True
| {False
} Option to append data (or replace it)
- prefix: {
None
} |str
Prefix applied to dbsrc col when saved in db
- suffix: {
None
} |str
Suffix applied to dbsrc col when saved in db
- db:
- Effects:
- db.cols:
list
[str
] Appends each col in cols where not present
- db[col]: dbsrc[col]
Reference to dbsrc data for each col
- db.cols:
- Versions:
2019-12-06
@ddalle
: Version 1.0
2021-09-10
@ddalle
: Version 1.1; prefix and suffix
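- Example:
A minimal sketch; the data and column names are illustrative.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> data = {"mach": np.array([0.8, 0.9, 1.1]), "CA": np.array([0.35, 0.37, 0.41])}
>>> db.link_data(data, cols=["mach", "CA"])  # db["CA"] references data["CA"]
>>> db.link_data(data, cols=["CA"], prefix="core.")  # saved under a prefixed name, e.g. "core.CA"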
- link_db(dbsrc, init=True)¶
Link attributes from another DataKit
- Call:
>>> qdb = db.link_db(dbsrc, init=True)
- Inputs:
- db:
DataKit
Data container
- dbsrc:
DataKit
DataKit whose data and attributes are linked
- init: {
True
} |False
Flag controlling initialization during link
- db:
- Outputs:
- qdb:
True
|False
Whether or not dbsrc was linked
- qdb:
- Versions:
2021-07-20
@ddalle
: Version 1.0
- lstrip_colname(col, prefix)¶
Remove a prefix from a column name
This maintains component names, so for example if col is
"bullet.UCN"
, and prefix is"U"
, the result is"bullet.CN"
.- Call:
>>> newcol = db.lstrip_colname(col, prefix)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to strip
- prefix:
str
Prefix to remove
- db:
- Outputs:
- newcol:
str
Stripped name
- newcol:
- Versions:
2020-03-24
@ddalle
: Version 1.0
- make_integral(col, xcol=None, ocol=None, **kw)¶
Integrate the columns of a 2D data col
This method will not perform integration if ocol is already present in the database.
- Call:
>>> y = db.make_integral(col, xcol=None, ocol=None, **kw)
- Inputs:
- db:
DataKit
Database with analysis tools
- col:
str
Name of data column to integrate
- xcol: {
None
} |str
Name of column to use as x-coords for integration
- ocol: {
col[1:]
} |str
Name of col to store result in
- mask:
np.ndarray
[bool
|int
] Mask or indices of which cases to integrate
- x: {
None
} |np.ndarray
Optional 1D or 2D x-coordinates directly specified
- dx: {
1.0
} |float
Uniform spacing to use if xcol and x are not used
- method: {
"trapz"
} |"left"
|"right"
| callable Integration method or callable function taking two args like
np.trapz()
- db:
- Outputs:
- y:
np.ndarray
1D array of integral of each column of db[col]
- y:
- Versions:
2020-06-10
@ddalle
: Version 1.0
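- Example:
A sketch integrating a 2D line-load column over its x-coordinate; the column layout (one column of db[col] per case) follows the description above, and the names are illustrative.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> x = np.linspace(0.0, 1.0, 51)
>>> dCN = np.vstack([3*x**2, 2*x]).T  # shape (51, 2): two cases
>>> db.link_data({"bullet.x": x, "bullet.dCN": dCN}, cols=["bullet.x", "bullet.dCN"])
>>> CN = db.make_integral("bullet.dCN", xcol="bullet.x", ocol="bullet.CN")
>>> # trapezoidal integrals of 3*x**2 and 2*x on [0, 1] -> approx [1.0, 1.0]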
- make_png(png, fpng, cols=None, **kw)¶
Set all parameters to describe PNG image
- Call:
>>> db.make_png(png, fpng, cols, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- png:
str
Name used to tag this PNG image
- fpng:
str
Name of PNG file
- kw: {
{}
} |dict
Options to use when showing PNG image
- db:
- Versions:
2020-04-02
@ddalle
: Version 1.0
- make_response(col, method, args, *a, **kw)¶
Set evaluation method for a single column
- Call:
>>> db.make_response(col, method, args, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column for which to declare evaluation rules
- method:
"nearest"
|"linear"
|str
Response (lookup/interpolation/evaluation) method name
- args:
list
[str
] List of input arguments
- a:
tuple
Args passed to constructor, if used
- ndim: {
0
} |int
>= 0 Output dimensionality
- aliases: {
{}
} |dict
[str
] Dictionary of alternate variable names during evaluation; if aliases[k1] is k2, that means k1 is an alternate name for k2, and k2 is in args
- response_kwargs: {
{}
} |dict
Keyword arguments passed to functions
- I: {
None
} |np.ndarray
Indices of cases to include in response surface {all}
- function: {
"cubic"
} |str
Radial basis function type
- smooth: {
0.0
} |float
>= 0 Smoothing factor for methods that allow inexact interpolation,
0.0
for exact interpolation- func: callable
Function to use for
"function"
method- extracols: {
None
} |set
|list
Additional col names that might be used as kwargs
- db:
- Versions:
2019-01-07
@ddalle
: Version 1.0
2019-12-18
@ddalle
: Ported from tnakit
2019-12-30
@ddalle
: Version 2.0; map of methods
2020-02-18
@ddalle
: Name from _set_method1()
2020-03-06
@ddalle
: Name from set_response()
2020-04-24
@ddalle
: Add response_arg_alternates
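- Example:
A minimal sketch: declare an RBF response for "CN" in mach and alpha, then evaluate it through __call__(); the data here are illustrative.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> mach = np.repeat([0.8, 0.9, 1.1], 3)
>>> alpha = np.tile([0.0, 2.0, 4.0], 3)
>>> CN = 0.05*alpha*(1.0 + 0.1*mach)
>>> db.link_data({"mach": mach, "alpha": alpha, "CN": CN}, cols=["mach", "alpha", "CN"])
>>> db.make_response("CN", "rbf", ["mach", "alpha"], function="cubic")
>>> v = db("CN", 0.95, 3.0)             # positional, in args order
>>> v = db("CN", mach=0.95, alpha=3.0)  # or by argument name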
- make_responses(cols, method, args, *a, **kw)¶
Set evaluation method for a list of columns
- Call:
>>> db.make_responses(cols, method, args, *a, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- cols:
list
[str
] List of columns for which to declare evaluation rules
- method:
"nearest"
|"linear"
|str
Response (lookup/interpolation/evaluation) method name
- args:
list
[str
] List of input arguments
- a:
tuple
Args passed to constructor, if used
- aliases: {
{}
} |dict
[str
] Dictionary of alternate variable names during evaluation; if aliases[k1] is k2, that means k1 is an alternate name for k2, and k2 is in args
- response_kwargs: {
{}
} |dict
Keyword arguments passed to functions
- I: {
None
} |np.ndarray
Indices of cases to include in response {all}
- function: {
"cubic"
} |str
Radial basis function type
- smooth: {
0.0
} |float
>= 0 Smoothing factor for methods that allow inexact interpolation,
0.0
for exact interpolation
- db:
- Versions:
2019-01-07
@ddalle
: Version 1.0
2019-12-18
@ddalle
: Ported from tnakit
2020-02-18
@ddalle
: Name from SetEvalMethod()
2020-03-06
@ddalle
: Name from set_responses()
- make_seam(seam, fseam, xcol, ycol, cols, **kw)¶
Define and read a seam curve
- Call:
>>> db.make_seam(seam, fseam, xcol, ycol, cols, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- seam:
str
Name used to tag this seam curve
- fseam:
str
Name of seam curve file written by
triload
- xcol:
str
Name of col for seam curve x coords
- ycol:
str
Name of col for seam curve y coords
- kw: {
{}
} |dict
Options to use when plotting seam curve
- db:
- Versions:
2020-04-03
@ddalle
: Version 1.0
- make_source(ext, cls, n=None, cols=None, save=True, **kw)¶
Get or create a source by category (and number)
- Call:
>>> dbf = db.make_source(ext, cls) >>> dbf = db.make_source(ext, cls, n=None, cols=None, **kw)
- Inputs:
- db:
DataKit
Generic database
- ext:
str
Source type, by extension, to retrieve
- cls:
type
Subclass of
BaseFile
to create (if needed)- n: {
None
} |int
>= 0 Source number to search for
- cols: {db.cols} |
list
[str
] List of data columns to include in dbf
- save: {
True
} |False
Option to save dbf in db.sources
- attrs: {
None
} |list
[str
] Extra attributes of db to save for
.mat
files
- db:
- Outputs:
- dbf:
cape.attdb.ftypes.basefile.BaseFile
Data file interface
- dbf:
- Versions:
2020-02-13
@ddalle
: Version 1.0
2020-03-06
@ddalle
: Rename from get_dbf()
- match(dbt, maskt=None, cols=None, **kw)¶
Find cases with matching values of specified list of cols
- Call:
>>> I, J = db.match(dbt, maskt, cols=None, **kw) >>> Imap, J = db.match(dbt, **kw)
- Inputs:
- db:
DataKit
Data kit with response surfaces
- dbt:
dict
|DataKit
Target data set
- maskt:
np.ndarray
[bool
|int
] Subset of dbt to consider
- mask:
np.ndarray
[bool
|int
] Subset of db to consider
- cols: {
None
} |list
[str
] List of cols to compare (default all db float cols)
- tol: {
1e-4
} |float
>= 0 Default tolerance for all args
- tols: {
{}
} |dict
[float
>= 0] Dictionary of tolerances specific to arguments
- once:
True
| {False
} Option to find at most one db index per test point
- mapped:
True
| {False
} Option to switch output to Imap (overrides once)
- kw:
dict
Additional values to use during evaluation
- db:
- Outputs:
- I:
np.ndarray
[int
] Indices of cases in db that have a match in dbt
- J:
np.ndarray
[int
] Indices of cases in dbt that have a match in db
- Imap:
list
[np.ndarray
] List of db indices for each test point in J
- I:
- Versions:
2020-02-20
@ddalle
: Version 1.0
2020-03-06
@ddalle
: Name from find_pairwise()
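- Example:
A minimal sketch; dbt may be a plain dict, per the inputs above.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> db.link_data({"mach": np.array([0.8, 0.9, 1.1]), "alpha": np.array([0.0, 2.0, 4.0])}, cols=["mach", "alpha"])
>>> dbt = {"mach": np.array([0.9, 1.1]), "alpha": np.array([2.0, 4.0])}
>>> I, J = db.match(dbt, cols=["mach", "alpha"], tol=1e-6)
>>> # I -> indices into db (here [1, 2]); J -> indices into dbt (here [0, 1])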
- normalize_args(x, asarray=False)¶
Normalize mixed float and array arguments
- Call:
>>> X, dims = db.normalize_args(x, asarray=False)
- Inputs:
- db:
DataKit
Database with scalar output functions
- x:
list
[float
|np.ndarray
] Values for arguments, either float or array
- asarray:
True
| {False
} Force array output (otherwise allow scalars)
- db:
- Outputs:
- X:
list
[float
|np.ndarray
] Normalized arrays/floats all with same size
- dims:
tuple
[int
] Original dimensions of non-scalar input array
- X:
- Versions:
2019-03-11
@ddalle
: Version 1.0
2019-03-14
@ddalle
: Added asarray input
2019-12-18
@ddalle
: Ported from tnakit
2019-12-18
@ddalle
: Removed @staticmethod
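- Example:
A minimal sketch of broadcasting a scalar against an array.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> X, dims = db.normalize_args([0.9, np.array([0.0, 2.0, 4.0])], asarray=True)
>>> # X[0] and X[1] each have 3 entries; dims == (3,)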
- plot(*a, **kw)¶
Plot a scalar or linear data column
This function tests the output dimension of col. For a standard data column, which is a scalar, this will pass the args to
plot_scalar()
. Ifdb.get_ndim(col)
is2
, however (for example a line load),plot_linear()
will be called instead.- Call:
>>> h = db.plot(col, *a, **kw) >>> h = db.plot(col, I, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Data column (or derived column) to evaluate
- a:
tuple
[np.ndarray
|float
] Array of values for arguments to evaluator for col
- I:
np.ndarray
[int
] Indices of exact entries to plot
- xcol, xk:
str
Key/column name for x axis
- db:
- Keyword Arguments:
- Index, i: {
0
} |int
>=0 Index to select specific option from lists
- Rotate, rotate: {
True
} |False
Option to flip x and y axes
- PlotOptions, PlotOpts: {
None
} |dict
Options to
plt.plot()
for primary curve- PlotFormat, fmt: {
None
} |str
Format specifier as third arg to
plot()
- Label, label, lbl: {
None
} |str
Label passed to
plt.legend()
- PlotColor: {
None
} |str
|tuple
Color option to
plt.plot()
for primary curve- PlotLineWidth: {
None
} |int
> 0 |float
> 0.0 Line width for primary
plt.plot()
- PlotLineStyle:
":"
|"-"
|"none"
|"-."
|"--"
Line style for primary
plt.plot()
- Index, i: {
- Outputs:
- h:
plot_mpl.MPLHandle
Object of
matplotlib
handles
- h:
- Versions:
2020-04-20
@ddalle
: Version 1.0
- plot_contour(*a, **kw)¶
Create a contour plot of one col vs two others
- Call:
>>> h = db.plot_contour(col, *a, **kw) >>> h = db.plot_contour(col, mask, **kw) >>> h = db.plot_contour(col, mask_index, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Data column (or derived column) to evaluate
- a:
tuple
[np.ndarray
|float
] Array of values for arguments to evaluator for col
- mask:
np.ndarray
[bool
] Mask of which points to include in plot
- mask_index:
np.ndarray
[int
] Indices of points to include in plot
- xcol, xk:
str
Name of column to use for x axis
- ycol, yk:
str
Name of column to use for y axis
- db:
- Keyword Arguments:
- Index, i: {
0
} |int
>=0 Index to select specific option from lists
- Rotate, rotate: {
True
} |False
Option to flip x and y axes
- ContourType: {
tricontourf
} |tricontour
|tripcolor
Contour type specifier
- ContourLevels: {
None
} |int
|np.ndarray
Number or list of levels for contour plots
- ContourOptions: {
None
} |dict
Options to
plt.tricontour()
and variants- MarkPoints: {
True
} |False
Put a marker at contributing data points
- MarkerColor: {
None
} |str
|tuple
Color for markers in MarkerOptions
- MarkerOptions: {
None
} |dict
Options for markers on non-plot() functions
- MarkerSize: {
None
} |int
|float
markersize passed to MarkerOptions
- Label, label, lbl: {
None
} |str
Label passed to
plt.legend()
- ContourColorMap: {
None
} |str
Color map for contour plots
- Density, density: {
True
} |False
Option to scale histogram plots
- Index, i: {
0
} |int
>=0 Index to select specific option from lists
- Pad: {
None
} |float
Padding to add to both axes, ax.set_xlim and ax.set_ylim
- Rotate, rotate: {
True
} |False
Option to flip x and y axes
- XLabel, xlabel: {
None
} |str
Label to put on x axis
- XLim, xlim: {
None
} | (float
,float
) Limits for min and max value of x-axis
- XLimMax: {
None
} |float
Max value for x-axis in plot
- XLimMin: {
None
} |float
Min value for x-axis in plot
- XPad: {Pad} |
float
Extra padding to add to x axis limits
- YLabel, ylabel: {
None
} |str
Label to put on y axis
- YLim, ylim: {
None
} | (float
,float
) Limits for min and max value of y-axis
- YLimMax: {
None
} |float
Max value for y-axis in plot
- YLimMin: {
None
} |float
Min value for y-axis in plot
- YPad: {Pad} |
float
Extra padding to add to y axis limits
- Index, i: {
- Outputs:
- h:
plot_mpl.MPLHandle
Object of
matplotlib
handles
- h:
- Versions:
2020-04-24
@ddalle
: Version 1.0
- plot_linear(*a, **kw)¶
Plot a 1D-output col for one or more cases or conditions
- Call:
>>> h = db.plot_linear(col, *a, **kw) >>> h = db.plot_linear(col, I, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Data column (or derived column) to evaluate
- a:
tuple
[np.ndarray
|float
] Array of values for arguments to evaluator for col
- I:
np.ndarray
[int
] Indices of exact entries to plot
- xcol, xk: {db.response_xargs[col][0]} |
str
Key/column name for x axis
- db:
- Keyword Arguments:
- Index, i: {
0
} |int
>=0 Index to select specific option from lists
- Rotate, rotate: {
True
} |False
Option to flip x and y axes
- PlotOptions, PlotOpts: {
None
} |dict
Options to
plt.plot()
for primary curve- PlotFormat, fmt: {
None
} |str
Format specifier as third arg to
plot()
- ShowSeam: {
True
} |False
Override default seam curve status
- ShowPNG: {
True
} |False
Override default line load PNG status
- Label, label, lbl: {
None
} |str
Label passed to
plt.legend()
- PlotColor: {
None
} |str
|tuple
Color option to
plt.plot()
for primary curve- PlotLineWidth: {
None
} |int
> 0 |float
> 0.0 Line width for primary
plt.plot()
- PlotLineStyle:
":"
|"-"
|"none"
|"-."
|"--"
Line style for primary
plt.plot()
- Index, i: {
- Outputs:
- h:
plot_mpl.MPLHandle
Object of
matplotlib
handles
- h:
- Versions:
2020-03-30
@ddalle
: Version 1.0
- plot_png(col, fig=None, h=None, **kw)¶
Show tagged PNG image in new axes
- Call:
>>> h = db.plot_png(col, fig=None, h=None, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column being plotted
- png: {db.col_pngs[col]} |
str
Name used to tag this PNG image
- fig: {
None
} |Figure
|int
Name or number of figure in which to plot image
- h: {
None
} |cape.tnakit.plot_mpl.MPLHandle
Optional existing handle to various plot objects
- db:
- Outputs:
- h:
cape.tnakit.plot_mpl.MPLHandle
Plot object container
- h.img:
matplotlib.image.AxesImage
PNG image object
- h.ax_img:
AxesSubplot
Axes handle in which h.img is shown
- h:
- Versions:
2020-04-02
@ddalle
: Version 1.0
- plot_raw(x, y, **kw)¶
Plot 1D data sets directly, without response functions
- Call:
>>> h = db.plot_raw(xk, yk, **kw) >>> h = db.plot_raw(xv, yv, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- xk:
str
Name of col to use for x-axis
- yk:
str
Name of col to use for y-axis
- xv:
np.ndarray
Directly specified values for x-axis
- yv:
np.ndarray
Directly specified values for y-axis
- mask:
np.ndarray
[bool
|int
] Mask of which points to include in plot
- db:
- Outputs:
- h:
plot_mpl.MPLHandle
Object of
matplotlib
handles
- h:
- Versions:
2020-12-31
@ddalle
: Version 1.0
- plot_scalar(*a, **kw)¶
Plot a sweep of one data column over several cases
This is the base method for plotting scalar cols. Other methods may call this one with modifications to the default settings.
- Call:
>>> h = db.plot_scalar(col, *a, **kw) >>> h = db.plot_scalar(col, I, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Data column (or derived column) to evaluate
- a:
tuple
[np.ndarray
|float
] Array of values for arguments to evaluator for col
- I:
np.ndarray
[int
] Indices of exact entries to plot
- xcol, xk: {
None
} |str
Key/column name for x axis
- PlotExact:
True
|False
Plot exact values directly from database without interpolation. Default is
True
if I is used- PlotInterp:
True
|False
Plot values by using
DBc.__call__()
- MarkExact:
True
|False
Mark interpolated curves with markers where actual data points are present
- db:
- Keyword Arguments:
- Index, i: {
0
} |int
>=0 Index to select specific option from lists
- Rotate, rotate: {
True
} |False
Option to flip x and y axes
- PlotOptions, PlotOpts: {
None
} |dict
Options to
plt.plot()
for primary curve- PlotFormat, fmt: {
None
} |str
Format specifier as third arg to
plot()
- Label, label, lbl: {
None
} |str
Label passed to
plt.legend()
- PlotColor: {
None
} |str
|tuple
Color option to
plt.plot()
for primary curve- PlotLineWidth: {
None
} |int
> 0 |float
> 0.0 Line width for primary
plt.plot()
- PlotLineStyle:
":"
|"-"
|"none"
|"-."
|"--"
Line style for primary
plt.plot()
- Index, i: {
- Outputs:
- h:
plot_mpl.MPLHandle
Object of
matplotlib
handles
- h:
- Versions:
2015-05-30
@ddalle
: Version 1.0
2015-12-14
@ddalle
: Added error bars
2019-12-26
@ddalle
: From tnakit.db.db1
2020-03-30
@ddalle
: Redocumented
- plot_seam(col, fig=None, h=None, **kw)¶
Show tagged seam curve in new axes
- Call:
>>> h = db.plot_seam(col, fig=None, h=None, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column being plotted
- seam: {db.col_seams[col]} |
str
Name used to tag this seam curve
- fig: {
None
} |Figure
|int
Name or number of figure in which to plot image
- h: {
None
} |cape.tnakit.plot_mpl.MPLHandle
Optional existing handle to various plot objects
- db:
- Outputs:
- h:
cape.tnakit.plot_mpl.MPLHandle
Plot object container
- h.lines_seam:
list
[matplotlib.Line2D
] Seam curve handle
- h.ax_seam:
AxesSubplot
Axes handle in which the seam curve is shown
- h:
- Versions:
2020-04-02
@ddalle
: Version 1.0
- prep_mask(mask, col=None, V=None)¶
Prepare logical or index mask
- Call:
>>> I = db.prep_mask(mask, col=None, V=None) >>> I = db.prep_mask(mask_index, col=None, V=None)
- Inputs:
- db:
DataKit
Data container
- mask: {
None
} |np.ndarray
[bool
] Logical mask of
True
/False
values- mask_index:
np.ndarray
[int
] Indices of db[col] to consider
- col: {
None
} |str
Reference column to use for size checks
- V: {
None
} |np.ndarray
Array of values to test shape/values of mask
- db:
- Outputs:
- I:
np.ndarray
[int
] Indices of db[col] to consider
- I:
- Versions:
2020-03-09
@ddalle
: Version 1.0
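- Example:
A minimal sketch: both mask styles reduce to the same index array.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> db.link_data({"mach": np.array([0.8, 0.9, 1.1, 1.2])}, cols=["mach"])
>>> I1 = db.prep_mask(np.array([True, False, True, False]), col="mach")
>>> I2 = db.prep_mask(np.array([0, 2]), col="mach")
>>> # both yield array([0, 2])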
- prepend_colname(col, prefix)¶
Add a prefix to a column name
This maintains component names, so for example if col is
"bullet.CN"
, and prefix is"U"
, the result is"bullet.UCN"
.- Call:
>>> newcol = db.prepend_colname(col, prefix)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to prepend
- prefix:
str
Prefix to add
- db:
- Outputs:
- newcol:
str
Prefixed name
- newcol:
- Versions:
2020-03-24
@ddalle
: Version 1.0
- rcall(*a, **kw)¶
Evaluate predefined response method
- Call:
>>> v = db.rcall(*a, **kw) >>> v = db.rcall(col, x0, x1, ...) >>> V = db.rcall(col, x0, X1, ...) >>> v = db.rcall(col, k0=x0, k1=x1, ...) >>> V = db.rcall(col, k0=x0, k1=X1, ...)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- x0:
float
|int
Numeric value for first argument to col response
- x1:
float
|int
Numeric value for second argument to col response
- X1:
np.ndarray
[float
] Array of x1 values
- k0:
str
|unicode
Name of first argument to col response
- k1:
str
|unicode
Name of second argument to col response
- db:
- Outputs:
- v:
float
|int
Function output for scalar evaluation
- V:
np.ndarray
[float
] Array of function outputs
- v:
- Versions:
2019-01-07
@ddalle
: Version 1.0
2019-12-30
@ddalle
: Version 2.0; map of methods
2020-04-20
@ddalle
: Moved meat from __call__()
- rcall_exact(col, args, *a, **kw)¶
Evaluate a coefficient by looking up exact matches
- Call:
>>> v = db.rcall_exact(col, args, *a, **kw) >>> V = db.rcall_exact(col, args, *a, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- args:
list
|tuple
List of explanatory col names (numeric)
- a:
tuple
[float
|np.ndarray
] Tuple of values for each argument in args
- tol: {
1.0e-4
} |float
> 0 Default tolerance for exact match
- tols: {
{}
} |dict
[float
> 0] Dictionary of key-specific tolerances
- kw:
dict
[float
|np.ndarray
] Alternate keyword arguments
- db:
- Outputs:
- v:
None
|float
Value of db[col] exactly matching conditions a
- V:
np.ndarray
[float
] Multiple values matching exactly
- v:
- Versions:
2018-12-30
@ddalle
: Version 1.0
2019-12-17
@ddalle
: Ported from tnakit
2020-04-24
@ddalle
: Switched args to tuple
2020-05-19
@ddalle
: Support for 2D cols
- rcall_from_arglist(col, args, *a, **kw)¶
Evaluate column from arbitrary argument list
This function is used to evaluate a col when given the arguments to some other column.
- Call:
>>> V = db.rcall_from_arglist(col, args, *a, **kw) >>> V = db.rcall_from_arglist(col, args, x0, X1, ...) >>> V = db.rcall_from_arglist(col, args, k0=x0, k1=x1, ...) >>> V = db.rcall_from_arglist(col, args, k0=x0, k1=X1, ...)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- args:
list
[str
] List of arguments provided
- x0:
float
|int
Numeric value for first argument to col evaluator
- x1:
float
|int
Numeric value for second argument to col evaluator
- X1:
np.ndarray
[float
] Array of x1 values
- k0:
str
Name of first argument to col evaluator
- k1:
str
Name of second argument to col evaluator
- db:
- Outputs:
- V:
float
|np.ndarray
Values of col as appropriate
- V:
- Versions:
2019-03-13
@ddalle
: Version 1.0
2019-12-26
@ddalle
: From tnakit
- rcall_from_index(col, I, **kw)¶
Evaluate data column from indices
This function has the same output as accessing
db[col][I]
if col is directly present in the database. However, it’s possible that col can be evaluated by some other technique, in which case direct access would fail but this function may still succeed.This function looks up the appropriate input variables and uses them to generate inputs to the database evaluation method.
- Call:
>>> V = db.rcall_from_index(col, I, **kw) >>> v = db.rcall_from_index(col, i, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- I:
np.ndarray
[int
] Indices at which to evaluate function
- i:
int
Single index at which to evaluate
- db:
- Outputs:
- V:
np.ndarray
Values of col as appropriate
- v:
float
Scalar evaluation of col
- V:
- Versions:
2019-03-13
@ddalle
: Version 1.0
2019-12-26
@ddalle
: From tnakit
- rcall_function(col, args, *x, **kw)¶
Evaluate a single user-saved function
- Call:
>>> y = db.rcall_function(col, args, *x)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- args:
list
|tuple
List of lookup key names
- x:
tuple
Values for each argument in args
- db:
- Outputs:
- y:
None
|float
|db[col].__class__
Interpolated value from
db[col]
- y:
- Versions:
2018-12-31
@ddalle
: Version 1.0
2019-12-17
@ddalle
: Ported from tnakit
- rcall_inverse_distance(col, args, *a, **kw)¶
Evaluate a col using inverse-distance interpolation
- Call:
>>> v = db.rcall_inverse_distance(col, args, *a, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of (numeric) column to evaluate
- args:
list
|tuple
List of explanatory col names (numeric)
- a:
tuple
[float
|np.ndarray
] Tuple of values for each argument in args
- db:
- Outputs:
- y:
float
| db[col].__class__ Value of db[col] at point closest to a
- y:
- Versions:
2023-01-30
@ddalle
: Version 1.0
- rcall_multilinear(col, args, *x, **kw)¶
Perform linear interpolation in n dimensions
This assumes the database is ordered with the first entry of args varying the most slowly and that the data is perfectly regular.
- Call:
>>> y = db.rcall_multilinear(col, args, *x)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- args:
list
|tuple
List of lookup key names
- x:
list
|tuple
|np.ndarray
Vector of values for each argument in args
- bkpt:
True
| {False
} Flag to interpolate break points instead of data
- db:
- Outputs:
- y:
None
|float
|db[col].__class__
Interpolated value from
db[col]
- y:
- Versions:
2018-12-30
@ddalle
: Version 1.0
2019-12-17
@ddalle
: Ported from tnakit
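- Example:
A hedged sketch: a regular 2x2 (mach, alpha) matrix with mach varying most slowly, declared via make_response() with the "linear" method (which is assumed here to dispatch to this routine).
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> mach = np.repeat([0.8, 0.9], 2)  # slowest-varying arg first
>>> alpha = np.tile([0.0, 4.0], 2)
>>> CN = 0.05*alpha + 0.1*mach
>>> db.link_data({"mach": mach, "alpha": alpha, "CN": CN}, cols=["mach", "alpha", "CN"])
>>> db.make_response("CN", "linear", ["mach", "alpha"])
>>> v = db("CN", 0.85, 2.0)  # bilinear; CN is linear in each arg, so expect 0.185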
- rcall_multilinear_schedule(col, args, *x, **kw)¶
Perform “scheduled” linear interpolation in n dimensions
This assumes the database is ordered with the first entry of args varying the most slowly and that the data is perfectly regular. However, each slice at a constant value of args[0] may have separate break points for all the other args. For example, the matrix of angle of attack and angle of sideslip may be different at each Mach number. In this case, db.bkpts will be a list of 1D arrays for alpha and beta and just a single 1D array for mach.
- Call:
>>> y = db.rcall_multilinear_schedule(col, args, x)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- args:
list
|tuple
List of lookup key names
- x:
tuple
Values for each argument in args
- tol: {
1e-6
} |float
>= 0 Tolerance for matching slice key
- db:
- Outputs:
- y:
None
|float
|db[col].__class__
Interpolated value from
db[col]
- y:
- Versions:
2019-04-19
@ddalle
: Version 1.0
2019-12-17
@ddalle
: Ported from tnakit
- rcall_nearest(col, args, *a, **kw)¶
Evaluate a coefficient by looking up nearest match
- Call:
>>> v = db.rcall_nearest(col, args, *a, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of (numeric) column to evaluate
- args:
list
|tuple
List of explanatory col names (numeric)
- a:
tuple
[float
|np.ndarray
] Tuple of values for each argument in args
- weights: {
{}
} |dict
(float
> 0) Dictionary of arg-specific distance weights
- db:
- Outputs:
- y:
float
| db[col].__class__ Value of db[col] at point closest to a
- y:
- Versions:
2018-12-30
@ddalle
: Version 1.0
2019-12-17
@ddalle
: Ported from tnakit
2020-04-24
@ddalle
: Switched args to tuple
2020-05-19
@ddalle
: Support for 2D cols
- rcall_rbf(col, args, *x, **kw)¶
Evaluate a single radial basis function
- Call:
>>> y = db.rcall_rbf(col, args, *x)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- args:
list
|tuple
List of lookup key names
- x:
tuple
Values for each argument in args
- db:
- Outputs:
- y:
float
|np.ndarray
Interpolated value from db[col]
- y:
- Versions:
2018-12-31
@ddalle
: Version 1.0
2019-12-17
@ddalle
: Ported from tnakit
- rcall_rbf_linear(col, args, *x, **kw)¶
Evaluate two RBFs at slices of first arg and interpolate
- Call:
>>> y = db.rcall_rbf_linear(col, args, x)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- args:
list
|tuple
List of lookup key names
- x:
tuple
Values for each argument in args
- db:
- Outputs:
- y:
float
|np.ndarray
Interpolated value from db[col]
- y:
- Versions:
2018-12-31
@ddalle
: Version 1.0
2019-12-17
@ddalle
: Ported from tnakit
- rcall_rbf_schedule(col, args, *x, **kw)¶
Evaluate two RBFs at slices of first arg and interpolate
- Call:
>>> y = db.rcall_rbf_schedule(col, args, *x)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to evaluate
- args:
list
|tuple
List of lookup key names
- x:
tuple
Values for each argument in args
- db:
- Outputs:
- y:
float
|np.ndarray
Interpolated value from db[col]
- y:
- Versions:
2018-12-31
@ddalle
: Version 1.0
- rcall_uq(*a, **kw)¶
Evaluate specified UQ cols for a specified col
This function will evaluate the UQ cols specified for a given nominal column by referencing the appropriate subset of db.response_args for any UQ cols. It evaluates the UQ col named in db.uq_cols (see the example below). For example, if CN is a function of
"mach"
,"alpha"
, and"beta"
;db.uq_cols["CN"]
is UCN; and UCN is a function of"mach"
only, this function passes only the Mach numbers to UCN for evaluation.- Call:
>>> U = db.rcall_uq(*a, **kw) >>> U = db.rcall_uq(col, x0, X1, ...) >>> U = db.rcall_uq(col, k0=x0, k1=x1, ...) >>> U = db.rcall_uq(col, k0=x0, k1=X1, ...)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of nominal column to evaluate
- db.uq_cols:
dict
[str
] Dictionary of UQ col names for each col
- x0:
float
|int
Numeric value for first argument to col evaluator
- x1:
float
|int
Numeric value for second argument to col evaluator
- X1:
np.ndarray
[float
] Array of x1 values
- k0:
str
Name of first argument to col evaluator
- k1:
str
Name of second argument to col evaluator
- db:
- Outputs:
- U:
dict
[float
|np.ndarray
] Values of relevant UQ col(s) by name
- U:
- Versions:
2019-03-07
@ddalle
: Version 1.0
2019-12-26
@ddalle
: From tnakit
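- Example:
A minimal sketch mirroring the description above: "UCN" is the UQ col for "CN" and is a function of mach only; the data and names are illustrative.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> db.link_data({
...     "mach": np.array([0.8, 0.9, 1.1]),
...     "alpha": np.array([0.0, 2.0, 4.0]),
...     "CN": np.array([0.10, 0.21, 0.45]),
...     "UCN": np.array([0.010, 0.012, 0.015]),
... }, cols=["mach", "alpha", "CN", "UCN"])
>>> db.make_response("CN", "rbf", ["mach", "alpha"])
>>> db.make_response("UCN", "rbf", ["mach"])  # UQ col: function of mach only
>>> db.set_uq_col("CN", "UCN")
>>> U = db.rcall_uq("CN", 0.9, 2.0)  # dict like {"UCN": ...}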
- read_csv(fname, **kw)¶
Read data from a CSV file
- Call:
>>> db.read_csv(fname, **kw) >>> db.read_csv(dbcsv, **kw) >>> db.read_csv(f, **kw)
- Inputs:
- db:
DataKit
Generic database
- fname:
str
Name of CSV file to read
- dbcsv:
cape.attdb.ftypes.csvfile.CSVFile
Existing CSV file
- f:
file
Open CSV file interface
- append:
True
| {False
} Option to combine cols with same name
- save, SaveCSV:
True
| {False
} Option to save the CSV interface to db._csv
- db:
- Versions:
2019-12-06
@ddalle
: Version 1.0
- read_csvsimple(fname, **kw)¶
Read data from a simple CSV file
- Call:
>>> db.read_csvsimple(fname, **kw) >>> db.read_csvsimple(dbcsv, **kw) >>> db.read_csvsimple(f, **kw)
- Inputs:
- db:
DataKit
Generic database
- fname:
str
Name of CSV file to read
- dbcsv:
cape.attdb.ftypes.csvfile.CSVSimple
Existing CSV file
- f:
file
Open CSV file interface
- save, SaveCSV:
True
| {False
} Option to save the CSV interface to db._csv
- db:
- Versions:
2019-12-06
@ddalle
: Version 1.0
- read_mat(fname, **kw)¶
Read data from a version 5
.mat
file- Call:
>>> db.read_mat(fname, **kw) >>> db.read_mat(dbmat, **kw)
- Inputs:
- db:
DataKit
Generic database
- fname:
str
Name of
.mat
file to read- dbmat:
cape.attdb.ftypes.mat.MATFile
Existing MAT file interface
- save, SaveMAT:
True
| {False
} Option to save the MAT interface to db._mat
- db:
- See Also:
cape.attdb.ftypes.mat.MATFile
- Versions:
2019-12-17
@ddalle
: Version 1.0
- read_rbf_csv(fname, **kw)¶
Read RBF directly from a CSV file
- Call:
>>> db.read_rbf_csv(fname, **kw)
- Inputs:
- db:
DataKit
Generic database
- fname:
str
Name of CSV file to read
- db:
- Versions:
2021-06-17
@ddalle
: Version 1.0
- read_textdata(fname, **kw)¶
Read data from a text data file
- Call:
>>> db.read_textdata(fname, **kw) >>> db.read_textdata(dbcsv, **kw) >>> db.read_textdata(f, **kw)
- Inputs:
- db:
DataKit
Generic database
- fname:
str
Name of text data file to read
- dbcsv:
cape.attdb.ftypes.textdata.TextDataFile
Existing text data file interface
- f:
file
Open CSV file interface
- save: {
True
} |False
Option to save the CSV interface to db._csv
- db:
- Versions:
2019-12-06
@ddalle
: Version 1.0
- read_tsv(fname, **kw)¶
Read data from a space-separated file
- Call:
>>> db.read_tsv(fname, **kw) >>> db.read_tsv(dbtsv, **kw) >>> db.read_tsv(f, **kw)
- Inputs:
- db:
DataKit
Generic database
- fname:
str
Name of TSV file to read
- dbtsv:
cape.attdb.ftypes.tsvfile.TSVFile
Existing TSV file
- f:
file
Open TSV file handle
- append:
True
| {False
} Option to combine cols with same name
- save, SaveTSV:
True
| {False
} Option to save the TSV interface to db.sources
- db:
- See Also:
cape.attdb.ftypes.tsvfile.TSVFile
- Versions:
2019-12-06
@ddalle
: Version 1.0 (read_csv())
2021-01-14
@ddalle
: Version 1.0
- read_tsvsimple(fname, **kw)¶
Read data from a simple TSV file
- Call:
>>> db.read_tsvsimple(fname, **kw) >>> db.read_tsvsimple(dbtsv, **kw) >>> db.read_tsvsimple(f, **kw)
- Inputs:
- db:
DataKit
Generic database
- fname:
str
Name of TSV file to read
- dbtsv:
cape.attdb.ftypes.tsvfile.TSVSimple
Existing TSV file
- f:
file
Open TSV file interface
- save, SaveTSV:
True
| {False
} Option to save the TSV interface to db.sources
- db:
- Versions:
2019-12-06
@ddalle
: Version 1.0 (read_csvsimple)
2021-01-14
@ddalle
: Version 1.0
- read_xls(fname, **kw)¶
Read data from an
.xls
or.xlsx
file- Call:
>>> db.read_xls(fname, **kw) >>> db.read_xls(dbxls, **kw) >>> db.read_xls(wb, **kw) >>> db.read_xls(ws, **kw)
- Inputs:
- db:
DataKit
Generic database
- dbxls:
cape.attdb.ftypes.xls.XLSFile
Existing XLS file interface
- fname:
str
Name of
.xls
or.xlsx
file to read- sheet: {
0
} |int
|str
Worksheet name or number
- wb:
xlrd.book.Book
Open workbook (spreadsheet file)
- ws:
xlrd.sheet.Sheet
Direct access to a worksheet
- skiprows: {
None
} |int
>= 0 Number of rows to skip before reading data
- subrows: {
0
} |int
> 0 Number of rows below header row to skip
- skipcols: {
None
} |int
>= 0 Number of columns to skip before first data column
- maxrows: {
None
} |int
> skiprows Maximum row number of data
- maxcols: {
None
} |int
> skipcols Maximum column number of data
- save, SaveXLS:
True
| {False
} Option to save the XLS interface to db._xls
- db:
- See Also:
cape.attdb.ftypes.xls.XLSFile
- Versions:
2019-12-06
@ddalle
: Version 1.0
- regularize_by_griddata(cols, args=None, **kw)¶
Regularize col(s) to full-factorial matrix of several args
The values of each arg to use for the full-factorial matrix are taken from the db.bkpts dictionary, usually generated by
get_bkpts()
. The values in db.bkpts, however, can be set manually in order to interpolate the data onto a specific matrix of points.- Call:
>>> db.regularize_by_griddata(cols=None, args=None, **kw)
- Inputs:
- db:
DataKit
Database with response toolkit
- cols:
list
[str
] List of output data columns to regularize
- args: {
None
} |list
[str
] List of arguments; default from db.response_args
- scol: {
None
} |str
|list
Optional name of slicing col(s) for matrix
- cocols: {
None
} |list
[str
] Other dependent input cols; default from db.bkpts
- method: {
"linear"
} |"cubic"
|"nearest"
Interpolation method;
"cubic"
only for 1D or 2D- rescale:
True
| {False
} Rescale input points to unit cube before interpolation
- tol: {
1e-4
} |float
Default tolerance to use in combination with slices
- tols: {
{}
} |dict
Dictionary of specific tolerances for single cols
- translators:
dict
[str
] Alternate names; col -> trans[col]
- prefix:
str
|dict
Universal prefix or col-specific prefixes
- suffix:
str
|dict
Universal suffix or col-specific suffixes
- v, verbose:
True
| {False
} Verbosity flag
- db:
- Versions:
2020-03-10
@ddalle
: Version 1.0
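- Example:
A hedged sketch: scattered (mach, alpha) data interpolated onto a full-factorial matrix; populating db.bkpts directly as a dict is an assumption here (it is normally generated by get_bkpts()).
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> db.link_data({
...     "mach": np.array([0.80, 0.85, 0.92, 1.10, 1.05]),
...     "alpha": np.array([0.1, 3.9, 2.2, 0.0, 4.1]),
...     "CN": np.array([0.01, 0.20, 0.12, 0.00, 0.22]),
... }, cols=["mach", "alpha", "CN"])
>>> db.set_response_args("CN", ["mach", "alpha"])
>>> db.bkpts = {"mach": np.array([0.85, 0.9, 1.0]), "alpha": np.array([0.5, 2.0, 3.5])}
>>> db.regularize_by_griddata(["CN"], method="linear")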
- regularize_by_rbf(cols, args=None, **kw)¶
Regularize col(s) to full-factorial matrix of several args
The values of each arg to use for the full-factorial matrix are taken from the db.bkpts dictionary, usually generated by
get_bkpts()
. The values in db.bkpts, however, can be set manually in order to interpolate the data onto a specific matrix of points.- Call:
>>> db.regularize_by_rbf(cols=None, args=None, **kw)
- Inputs:
- db:
DataKit
Database with response toolkit
- cols:
list
[str
] List of output data columns to regularize
- args: {
None
} |list
[str
] List of arguments; default from db.response_args
- scol: {
None
} |str
|list
Optional name of slicing col(s) for matrix
- cocols: {
None
} |list
[str
] Other dependent input cols; default from db.bkpts
- function: {
"cubic"
} |str
Basis function for
scipy.interpolate.Rbf
- tol: {
1e-4
} |float
Default tolerance to use in combination with slices
- tols: {
{}
} |dict
Dictionary of specific tolerances for single cols
- translators:
dict
[str
] Alternate names; col -> trans[col]
- prefix:
str
|dict
Universal prefix or col-specific prefixes
- suffix:
str
|dict
Universal suffix or col-specific suffixes
- db:
- Versions:
2018-06-08
@ddalle
: Version 1.0
2020-02-24
@ddalle
: Version 2.0
- remove_mask(mask, cols=None)¶
Remove cases in a mask for one or more cols
This function is the opposite of
apply_mask()
- Call:
>>> db.remove_mask(mask, cols=None) >>> db.remove_mask(mask_index, cols=None)
- Inputs:
- db:
DataKit
Database with scalar output functions
- mask: {
None
} |np.ndarray
[bool
] Logical mask of
True
/False
values- mask_index:
np.ndarray
[int
] Indices of values to delete
- cols: {
None
} |list
[str
] List of columns to subset (default is all)
- db:
- Effects:
- db[col]:
list
|np.ndarray
Subset db[col][mask] or similar
- db[col]:
- Versions:
2021-09-10
@ddalle
: Version 1.0
- rename_col(col1, col2)¶
Rename a column from col1 to col2
- Call:
>>> db.rename_col(col1, col2)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col1:
str
Name of column in db to rename
- col2:
str
Renamed title for col1
- db:
- Versions:
2021-09-10
@ddalle
: Version 1.0
- rstrip_colname(col, suffix)¶
Remove a suffix from a column name
This maintains component names, so for example if col is
"bullet.CLMX"
, and suffix is"X"
, the result is"bullet.CLM"
.- Call:
>>> newcol = db.rstrip_colname(col, suffix)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to strip
- suffix:
str
Suffix to remove from column name
- db:
- Outputs:
- newcol:
str
Stripped name
- newcol:
- Versions:
2020-03-24
@ddalle
: Version 1.0
- semilogy_raw(x, y, **kw)¶
Plot 1D data sets directly, without response functions
- Call:
>>> h = db.semilogy_raw(xk, yk, **kw) >>> h = db.semilogy_raw(xv, yv, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- xk:
str
Name of col to use for x-axis
- yk:
str
Name of col to use for y-axis
- xv:
np.ndarray
Directly specified values for x-axis
- yv:
np.ndarray
Directly specified values for y-axis
- mask:
np.ndarray
[bool
|int
] Mask of which points to include in plot
- db:
- Outputs:
- h:
plot_mpl.MPLHandle
Object of
matplotlib
handles
- h:
- Versions:
2021-01-05
@ddalle
: Version 1.0; fork of plot_raw()
- sep_response_kwargs(col, **kw)¶
Separate kwargs used for response and other options
- Call:
>>> kwr, kwo = db.sep_response_kwargs(col, **kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column to look up or calculate
- kw:
dict
Keyword args to
__call__()
or other methods
- db:
- Outputs:
- kwr:
dict
Keyword args used by the response for col
- kwo:
dict
Remaining keyword args (other options)
- kwr:
- Versions:
2020-04-24
@ddalle
: Version 1.0
- set_arg_converter(k, fn)¶
Set a conversion function for a specific evaluation argument
- Call:
>>> db.set_arg_converter(k, fn)
- Inputs:
- db:
DataKit
Database with scalar output functions
- k:
str
Name of evaluation argument
- fn:
function
Conversion function
- db:
- Versions:
2019-02-28
@ddalle
: Version 1.0
2019-12-18
@ddalle
: Ported from tnakit
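- Example:
A hedged sketch: a hypothetical converter computing total angle of attack "aoap" from alpha and beta; the keyword-argument signature shown for the converter is an assumption for illustration.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> def convert_aoap(**kw):
...     # assumed signature: receives available arg values by name [deg]
...     a = np.radians(kw.get("alpha", 0.0))
...     b = np.radians(kw.get("beta", 0.0))
...     return np.degrees(np.arccos(np.cos(a)*np.cos(b)))
>>> db = rdb.DataKit()
>>> db.set_arg_converter("aoap", convert_aoap)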
- set_arg_default(k, v)¶
Set a default value for an evaluation argument
- Call:
>>> db.set_arg_default(k, v)
- Inputs:
- db:
DataKit
Database with scalar output functions
- k:
str
Name of evaluation argument
- v:
float
Default value of the argument to set
- db:
- Versions:
2019-02-28
@ddalle
: Version 1.0
2019-12-18
@ddalle
: Ported from tnakit
- set_col_png(col, png)¶
Set name/tag of PNG image to use when plotting col
- Call:
>>> db.set_col_png(col, png)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Data column to associate with png
- png:
str
Name/abbreviation/tag of PNG image to use
- db:
- Effects:
- db.col_pngs:
dict
Entry for col set to png
- db.col_pngs:
- Versions:
2020-04-01
@jmeeroff
: Version 1.0
- set_col_seam(col, seam)¶
Set name/tag of seam curve to use when plotting col
- Call:
>>> db.set_col_seam(col, seam)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Data column to associate with seam
- seam:
str
Name/abbreviation/tag of seam curve to use
- db:
- Effects:
- db.col_seams:
dict
Entry for col set to seam
- db.col_seams:
- Versions:
2020-04-02
@jmeeroff
: Version 1.0
- set_cols_png(cols, png)¶
Set name/tag of PNG image for several data columns
- Call:
>>> db.set_cols_png(cols, png)
- Inputs:
- db:
DataKit
Database with scalar output functions
- cols:
list
[str
Data columns to associate with png
- png:
str
Name/abbreviation/tag of PNG image to use
- db:
- Effects:
- db.col_pngs:
dict
Entry for col in cols set to png
- db.col_pngs:
- Versions:
2020-04-01
@ddalle
: Version 1.0
- set_cols_seam(cols, seam)¶
Set name/tag of seam curve for several data columns
- Call:
>>> db.set_cols_seam(cols, seam)
- Inputs:
- db:
DataKit
Database with scalar output functions
- cols:
list
[str
Data columns to associate with seam
- seam:
str
Name/abbreviation/tag of seam curve to use
- db:
- Effects:
- db.col_seams:
dict
Entry for col in cols set to seam
- db.col_seams:
- Versions:
2020-04-02
@jmeeroff
: Version 1.0
- set_defn(col, defn, _warnmode=0)¶
Set a column definition, with checks
- Call:
>>> db.set_defn(col, defn, _warnmode=0)
- Inputs:
- db:
DataKit
Data container
- col:
str
Data column name
- defn:
dict
(Partial) definition for col
- _warnmode: {
0
} |1
|2
Warning mode for invalid defn keys or values
- db:
- Versions:
2020-03-06
@ddalle
: Documented
- set_ndim(col, ndim)¶
Set database dimension for column col
- Call:
>>> db.set_ndim(col, ndim)
- Inputs:
- db:
cape.attdb.rdbscalar.DBResponseLinear
Database with multidimensional output functions
- col:
str
Name of column to evaluate
- ndim: {
0
} |int
Dimension of col in database
- db:
- Versions:
2019-12-30
@ddalle
: Version 1.0
- set_output_ndim(col, ndim)¶
Set output dimension for column col
- Call:
>>> db.set_output_ndim(col, ndim)
- Inputs:
- db:
cape.attdb.rdbscalar.DBResponseLinear
Database with multidimensional output functions
- col:
str
Name of column to evaluate
- ndim: {
0
} |int
Dimension of col at a single condition
- db:
- Versions:
2019-12-30
@ddalle
: Version 1.0
- set_output_xargs(col, xargs)¶
Set list of args to output for column col
- Call:
>>> db.set_output_xargs(col, xargs)
- Inputs:
- db:
cape.attdb.rdbscalar.DBResponseLinear
Database with multidimensional output functions
- col:
str
Name of column to evaluate
- xargs:
list
[str
] List of input args to one condition of col
- db:
- Versions:
2019-12-30
@ddalle
: Version 1.0
2020-03-27
@ddalle
: From db.defns to db.response_xargs
- set_png_fname(png, fpng)¶
Set name of PNG file
- Call:
>>> db.set_png_fname(png, fpng)
- Inputs:
- db:
DataKit
Database with scalar output functions
- png:
str
Name used to tag this PNG image
- fpng:
str
Name of PNG file
- db:
- Effects:
- db.png_fnames:
dict
Entry for png set to fpng
- db.png_fnames:
- Versions:
2020-03-31
@ddalle
: Version 1.0
- set_png_kwargs(png, **kw)¶
Set evaluation keyword arguments for PNG file
- Call:
>>> db.set_png_kwargs(png, kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- png:
str
Name used to tag this PNG image
- kw: {
{}
} |dict
Options to use when showing PNG image
- db:
- Versions:
2020-04-01
@jmeeroff
: Version 1.0
2020-04-02
@ddalle
: Use MPLOpts
2020-05-26
@ddalle
: Combine existing png_kwargs
- set_response_acol(col, acols)¶
Set names of any aux cols related to primary col
- Call:
>>> db.set_response_acol(col, acols)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column to evaluate
- acols:
list
[str
] Names of aux columns required to evaluate col
- db:
- Versions:
2020-03-23
@ddalle
: Version 1.0
2020-04-21
@ddalle
: Rename eval_acols
- set_response_arg_aliases(col, aliases)¶
Set alias names for evaluation args for a data column
- Call:
>>> db.set_response_arg_aliases(col, aliases)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column to evaluate
- aliases: {
{}
} |dict
Alternate names for args while evaluating col
- db:
- Versions:
2019-12-30
@ddalle
: Version 1.0
- set_response_args(col, args)¶
Set list of evaluation arguments for a column
- Call:
>>> db.set_response_args(col, args)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column
- args:
list
[str
] List of arguments for evaluating col
- db:
- Effects:
- db.response_args:
dict
Entry for col set to copy of args w/ type checks
- db.response_args:
- Versions:
2019-12-28
@ddalle
: Version 1.0
2020-04-21
@ddalle
: Rename from set_eval_args()
- set_response_func(col, fn)¶
Set specific callable for a column
- Call:
>>> db.set_response_func(col, fn)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column
- fn: callable |
None
Function or other callable entity
- db:
- Effects:
- db.response_methods:
dict
Entry for col set to method
- db.response_methods:
- Versions:
2019-12-28
@ddalle
: Version 1.0
- set_response_kwargs(col, kwargs)¶
Set evaluation keyword arguments for col evaluator
- Call:
>>> db.set_response_kwargs(col, kwargs)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column to evaluate
- kwargs: {
{}
} |dict
Keyword arguments to add while evaluating col
- db:
- Versions:
2019-12-30
@ddalle
: Version 1.0
- set_response_method(col, method)¶
Set name (only) of evaluation method
- Call:
>>> db.set_response_method(col, method)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column
- method:
str
Name of evaluation method (only checked for type)
- db:
- Effects:
- db.response_methods:
dict
Entry for col set to method
- db.response_methods:
- Versions:
2019-12-28
@ddalle
: Version 1.0
- set_seam_col(seam, xcol, ycol)¶
Set column names that define named seam curve
- Call:
>>> db.set_seam_col(seam, xcol, ycol)
- Inputs:
- db:
DataKit
Database with scalar output functions
- seam:
str
Name used to tag this seam curve
- xcol:
str
Name of col for seam curve x coords
- ycol:
str
Name of col for seam curve y coords
- db:
- Effects:
- db.seam_cols:
dict
Entry for seam set to (xcol, ycol)
- db.seam_cols:
- Versions:
2020-03-31
@ddalle
: Version 1.0
- set_seam_kwargs(seam, **kw)¶
Set evaluation keyword arguments for seam curve
- Call:
>>> db.set_seam_kwargs(seam, kw)
- Inputs:
- db:
DataKit
Database with scalar output functions
- seam:
str
Name used to tag this seam curve
- kw: {
{}
} |dict
Options to use when showing seam curve
- db:
- Versions:
2020-04-02
@jmeeroff
: Version 1.0
2020-05-26
@ddalle
: Combine existing png_kwargs
- set_uq_acol(ucol, acols)¶
Set name of extra data cols needed to compute UQ col
- Call:
>>> db.set_uq_acol(ucol, acols)
- Inputs:
- db:
DataKit
Database with scalar output functions
- ucol:
str
Name of UQ column to evaluate
- acols:
None
|list
[str
] Names of extra columns required to estimate ucol
- db:
- Versions:
2020-03-23
@ddalle
: Version 1.0
2020-05-08
@ddalle
: Remove if acols is None
- set_uq_afunc(ucol, afunc)¶
Set function to UQ column if aux cols are present
- Call:
>>> db.set_uq_afunc(ucol, afunc)
- Inputs:
- db:
DataKit
Database with scalar output functions
- ucol:
str
Name of UQ col to estimate
- afunc: callable
Function to estimate ucol
- db:
- Versions:
2020-03-23
@ddalle
: Version 1.0
2020-05-08
@ddalle
: Remove if afunc is None
- set_uq_col(col, ucol)¶
Set uncertainty column name for given col
- Call:
>>> db.set_uq_col(col, ucol)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of data column
- ucol:
None
|str
Name of column for UQ of col (remove if
None
)
- db:
- Effects:
- db.uq_cols:
dict
Entry for col set to ucol
- db.uq_cols:
- Versions:
2020-03-20
@ddalle
: Version 1.0
2020-05-08
@ddalle
: Remove if ucol is None
- set_uq_ecol(ucol, ecols)¶
Set names of any extra cols required for a UQ col
- Call:
>>> db.set_uq_ecol(ucol, ecol) >>> db.set_uq_ecol(ucol, ecols)
- Inputs:
- db:
DataKit
Database with scalar output functions
- ucol:
str
Name of UQ column to evaluate
- ecol:
str
Name of extra column required for ucol
- ecols:
list
[str
] Names of extra columns required for ucol
- db:
- Versions:
2020-03-21
@ddalle
: Version 1.0
2020-05-08
@ddalle
: Remove if ecols is None
- set_uq_efunc(ecol, efunc)¶
Set function to evaluate extra UQ column
- Call:
>>> db.set_uq_efunc(ecol, efunc)
- Inputs:
- db:
DataKit
Database with scalar output functions
- ecol:
str
Name of (correlated) UQ column to evaluate
- efunc:
None
| callable Function to evaluate ecol
- db:
- Versions:
2020-03-21
@ddalle
: Version 1.0
2020-05-08
@ddalle
: Remove if efunc is None
- sort(cols=None)¶
Sort (ascending) using list of cols
- Call:
>>> db.sort(cols=None)
- Inputs:
- db:
DataKit
Data interface with response mechanisms
- cols: {
None
} |list
[str
] List of columns on which to sort, with highest priority given to the first col and later cols used as tie-breakers
- db:
- Versions:
2021-09-17
@ddalle
: Version 1.0
- substitute_prefix(col, prefix1, prefix2)¶
Substitute one prefix for another in a column name
This maintains component names, so for example if col is
"bullet.CLMF"
, prefix1 is"CLM"
, suffix2 is"CN"
, and the result is"bullet.CNF"
.- Call:
>>> newcol = db.substitute_prefix(col, prefix1, prefix2)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to strip
- prefix1:
str
Prefix to remove from column name
- prefix2:
str
Prefix to add to column name
- db:
- Outputs:
- newcol:
str
Name with substituted prefix
- newcol:
- Versions:
2020-03-24
@ddalle
: Version 1.0
- substitute_suffix(col, suffix1, suffix2)¶
Substitute one suffix for another in a column name
This maintains component names, so for example if col is
"bullet.CLM"
, suffix1 is"LM"
, suffix2 is"N"
, and the result is"bullet.CN"
.- Call:
>>> newcol = db.substitute_suffix(col, suffix1, suffix2)
- Inputs:
- db:
DataKit
Database with scalar output functions
- col:
str
Name of column to strip
- suffix1:
str
Suffix to remove from column name
- suffix2:
str
Suffix to add to column name
- db:
- Outputs:
- newcol:
str
Name with substituted suffix
- newcol:
- Versions:
2020-03-24
@ddalle
: Version 1.0
- write_csv(fname, cols=None, **kw)¶
Write CSV file with full options
If db.sources has a CSV file, the database will be written from that object. Otherwise,
make_source()
is called.- Call:
>>> db.write_csv(fname, cols=None, **kw) >>> db.write_csv(f, cols=None, **kw)
- Inputs:
- db:
DataKit
Data container
- fname:
str
Name of file to write
- f:
file
File open for writing
- cols: {db.cols} |
list
[str
] List of columns to write
- kw:
dict
Keyword args to
CSVFile.write_csv()
- db:
- Versions:
2020-04-01
@ddalle
: Version 1.0
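- Example:
A minimal sketch of a CSV round trip; the file name is illustrative.
>>> import numpy as np
>>> import cape.attdb.rdb as rdb
>>> db = rdb.DataKit()
>>> db.link_data({"mach": np.array([0.8, 0.9]), "CA": np.array([0.35, 0.37])}, cols=["mach", "CA"])
>>> db.write_csv("wt-data.csv", cols=["mach", "CA"])
>>> db2 = rdb.DataKit("wt-data.csv")  # read it back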
- write_csv_dense(fname, cols=None)¶
Write dense CSV file
If db.sources has a CSV file, the database will be written from that object. Otherwise,
make_source()
is called.- Call:
>>> db.write_csv_dense(fname, cols=None) >>> db.write_csv_dense(f, cols=None)
- Inputs:
- db:
DataKit
Data container
- fname:
str
Name of file to write
- f:
file
File open for writing
- cols: {db.cols} |
list
[str
] List of columns to write
- db:
- Versions:
2019-12-06
@ddalle
: Version 1.0
2020-02-14
@ddalle
: Uniform "sources" interface
- write_mat(fname, cols=None, **kw)¶
Write a MAT file
If db.sources has a MAT file, the database will be written from that object. Otherwise,
make_source()
is called.- Call:
>>> db.write_mat(fname, cols=None)
- Inputs:
- db:
DataKit
Data container
- fname:
str
Name of file to write
- f:
file
File open for writing
- cols: {db.cols} |
list
[str
] List of columns to write
- db:
- Versions:
2019-12-06
@ddalle
: Version 1.0
- write_rbf_csv(fcsv, coeffs, **kw)¶
Write an ASCII file of radial basis func coefficients
- Call:
>>> db.write_rbf_csv(fcsv, coeffs=None, **kw)
- Inputs:
- db:
DataKit
Data container with responses
- fcsv:
str
Name of ASCII data file to write
- coeffs:
list
[str
] List of output coefficients to write
- fmts:
dict
|str
Dictionary of formats to use for each coeff
- comments: {
"#"
} |str
Comment character, used as first character of file
- delim: {
", "
} |str
Delimiter
- translators: {
{}
} |dict
Dictionary of coefficient translations, e.g. CAF -> CA
- db:
- Versions:
2019-07-24
@ddalle
: Version 1.0; WriteRBFCSV()
2021-06-09
@ddalle
: Version 2.0
- write_tsv(fname, cols=None, **kw)¶
Write TSV file with full options
If db.sources has a TSV file, the database will be written from that object. Otherwise,
make_source()
is called.- Call:
>>> db.write_tsv(fname, cols=None, **kw) >>> db.write_tsv(f, cols=None, **kw)
- Inputs:
- db:
DataKit
Data container
- fname:
str
Name of file to write
- f:
file
File open for writing
- cols: {db.cols} |
list
[str
] List of columns to write
- kw:
dict
Keyword args to
TSVFile.write_tsv()
- db:
- Versions:
2020-04-01
@ddalle
: Version 1.0 (write_csv)
2021-01-14
@ddalle
: Version 1.0
- write_tsv_dense(fname, cols=None)¶
Write dense TSV file
If db.sources has a TSV file, the database will be written from that object. Otherwise,
make_source()
is called.- Call:
>>> db.write_tsv_dense(fname, cols=None) >>> db.write_tsv_dense(f, cols=None)
- Inputs:
- db:
DataKit
Data container
- fname:
str
Name of file to write
- f:
file
File open for writing
- cols: {db.cols} |
list
[str
] List of columns to write
- db:
- Versions:
2019-12-06
@ddalle
: Version 1.0 (write_csv_dense)
2021-01-14
@ddalle
: Version 1.0
- write_xls(fname, cols=None, **kw)¶
Write XLS file with full options
If db.sources has an XLS file, the database will be written from that object. Otherwise,
make_source()
is called.- Call:
>>> db.write_xls(fname, cols=None, **kw) >>> db.write_xls(wb, cols=None, **kw)
- Inputs:
- db:
DataKit
Data container
- fname:
str
Name of file to write
- wb:
xlsxwriter.Workbook
Opened XLS workbook
- cols: {db.cols} |
list
[str
] List of columns to write
- kw:
dict
Keyword args passed to the XLS writer
- db:
- Versions:
2020-05-21
@ddalle
: Version 1.0