Launching the Solver

Once the grid has been preprocessed and partitioned and the NSU3D input file has been constructed, NSU3D is invoked as:

  • mpirun -np <nprocs> <path>/nsu3d input.nsu3d

or alternatively as

  • mpirun -np <nprocs> <path>/nsu3d input.nsu3d > nsu3d.out
  • mpirun -np <nprocs> <path>/nsu3d input.nsu3d | tee nsu3d.out

using standard Unix utilities to store the terminal output in a file. Note that jobs can be run with either mpirun or mpiexec, depending on how MPI (Message Passing Interface) is set up on the system.

<nprocs> specifies the number of processors/cores on which nsu3d is to be run.

This number must be consistent with the number of partitions in the grid file being read in (and specified in the input.nsu3d parameter input file). <nprocs> must be either equal to the number of partitions (fastest run time) or a factor of the number of partitions, in which case nsu3d will assign an equal number of partitions to each cpu or core. For example, if the grid is divided into 16 partitions and resides in the directory grid.part.16, then values <nprocs> = 16, 8, 4, 2 or 1 can be used (provided enough memory is available on each compute node). On the other hand, values such as 32, 17, or 15 are not allowed and will cause the solver to stop execution.
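For example, for the 16-partition grid above, and assuming (for illustration only) that the nsu3d executable resides in the current directory, any of the following invocations is valid; the first gives the fastest run time:

  • mpirun -np 16 ./nsu3d input.nsu3d | tee nsu3d.out
  • mpirun -np 8 ./nsu3d input.nsu3d | tee nsu3d.out
  • mpirun -np 4 ./nsu3d input.nsu3d | tee nsu3d.out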

The next argument to mpirun or mpiexec is the executable name for nsu3d, which may require a complete pathname to be visible from the location where mpirun is being invoked.

NSU3D is typically compiled with the MPI libraries and should be run as shown above. However, it is possible to compile NSU3D without the MPI libraries (see example), in which case it is run as a regular executable, i.e.: <path>/nsu3d input.nsu3d

NSU3D Inputs: Running NSU3D requires a parameter input file, which is called input.nsu3d in this example. The parameter input file is specified by listing the file name directly after the nsu3d executable name. A description of the input parameter file and sample input files can be found in the Solver Input File section below.

Solver Input File

Download a sample input list file for NSU3D here

A sample input list file for NSU3D is shown below. This input list is separated into various regions which deal with the general run parameters (Lines 1 to 17), the coarse grid parameters (Lines 19 to 23), the turbulence model parameters (Lines 25 to 33), the flow conditions (Line 36), the force coefficient definitions (Line 40), and the input file name (Line 43). The numerical values given in the sample file are in most cases optimal and should be used as the baseline values.



Input File Description
  • Line 1: contains a title for the particular case.
  • Line 3: controls the restart function. RESTARTF = 1.0 instructs the solver to read the initial flow field from the restart directory listed on Line 5. If RESTARTF = 0.0, no restart directory name is required and the flow is initialized with freestream values. If RESTARTF = 1.0 and RESTARTT = 0.0, then the flow solution is restarted from the restart directory, but the turbulence values are reinitialized to zero and recomputed from scratch. If RESTARTF = 1.0 and RESTARTT = 1.0, then both flow and turbulence values are read in from the restart directory. RNTCYC denotes the number of solution cycles to be performed on the turbulence model (with the flow field frozen) just after the restart directory has been read in. This option can be used, for example, to pre-converge the turbulence model, particularly if a flow field is read in from the restart directory but the turbulence values are not, possibly because an alternate turbulence model has been selected.
  • Line 5: The name of the restart directory is specified on Line 5. If RESTARTF = 0.0, this entry is ignored.
  • Line 7: MMESH is used for mesh sequencing, i.e. running on various meshes of the multigrid sequence. This parameter actually determines the number of lines of type Line 9 that are to follow. It is to be used in conjunction with the NMESH (number of multigrid meshes) and MESHLEVEL (identifies the mesh on which the solution is computed) parameters. For example, a typical Full Multigrid (FMG) Mesh Sequencing algorithm would solve the flow on the coarsest mesh (MESHLEVEL = 4.0, for the case where four grid levels are available), using a single mesh in the multigrid sequence (NMESH = 1.0), then interpolate the solution to the next finer mesh (MESHLEVEL = 3.0), solve the flow on this mesh using two meshes in the sequence (NMESH = 2.0), and then continue on each finer grid in this manner until the finest grid is reached (MESHLEVEL = 1.0). Each solution on a given grid level involves an entry of the type on Line 9, and the total of these entries must correspond to the MMESH value set here. In fact, the MMESH facility is more general than a simple mesh sequencing mechanism: any sequence of mesh solutions can be prescribed. For example, a partially converged solution on the finest mesh can first be achieved using the single (non-multigrid) algorithm, and the multigrid algorithm on the finest mesh can be invoked afterwards by using MMESH = 2.0, with the first line containing NMESH = 1.0, MESHLEVEL = 1.0 and the second line containing NMESH = 4.0, MESHLEVEL = 1.0. Additionally, the value MESHLEVEL = -1.0 enables the solution of the first order discretization on the finest grid level. This may be useful for pre-converging cases which experience start-up problems, thus increasing the overall robustness of the solver. The example input file above shows an initial phase of full multigrid mesh sequencing, followed by a first order accurate multigrid solution phase on the finest mesh, followed by a single grid mesh solution phase on the finest grid, followed by the second order accurate multigrid solution on the finest grid, which yields the final result. A good strategy for increasing robustness at startup is to perform 10 or 20 single grid cycles or first-order accurate multigrid cycles on the finest grid, followed by the second order accurate multigrid solution for several hundred cycles. Full multigrid mesh sequencing in general does not provide substantial convergence acceleration over the entire solution process, and is not often invoked. It can however be used to diagnose a problem with one of the agglomerated coarse levels. Thus, in general, a value MMESH = 2.0 is prescribed, while using only lines 9.5 and 9.6 (or 9.4 and 9.6). NTHREAD denotes the number of OpenMP threads to be used during parallel execution. For a hybrid MPI-OpenMP run, this refers to the number of threads running under each MPI process. On some systems it may also be necessary to set the OMP_NUM_THREADS environment variable to enable the requested number of threads to be employed (see the hybrid launch sketch following this list). If for such a reason NTHREAD threads cannot be spawned, nsu3d will terminate with a message to that effect.
  • Line 9: This line should be replicated (with changes) MMESH times. Each instance of this line refers to the solution on a particular mesh of the multigrid sequence, and defines the parameters required for that solution process. NCYC specifies the number of (multigrid) cycles to be executed. The maximum eddy viscosity computed throughout the entire flow field is printed out every NPRNT cycles. NMESH specifies the number of multigrid levels (including the fine grid). The minimum value is 1, which reproduces a single grid algorithm, and the maximum value is NLEVELS + 1, where NLEVELS is the value specified in the AMG3D input list when constructing the *.amg file for this run. MESHLEVEL specifies the mesh in the multigrid sequence on which the solution is to be obtained. This is used in grid sequencing or preconditioning the solution by performing single fine grid iterations and/or first-order accurate fine grid iterations. MESHLEVEL = 1.0 always refers to the finest grid of the sequence. MESHLEVEL = -1.0 also refers to the finest grid of the sequence, but switches the discretization to a first-order accurate form which converges more rapidly. MESHLEVEL = 2.0 refers to the second mesh in the sequence, i.e. the first coarse multigrid mesh. MESHLEVEL = 3.0 refers to the next coarser level, and so on. Since the coarse multigrid levels are based on agglomeration, a full second order discretization on these coarse levels is not possible, so it is important to remember that all MESHLEVEL > 1.0 grids are only first order accurate. CFLMIN and RAMPCYC are used to ramp up the CFL number for cases with start-up difficulty. The initial CFL number is given by CFLMIN, which is then ramped up to the final value CFL (Line 11) linearly over RAMPCYC cycles. TURBFREEZE has the effect of freezing the turbulence model after TURBFREEZE multigrid cycles. A value TURBFREEZE = 0.0 omits any freezing action, while a value TURBFREEZE = -1.0 initiates freezing immediately after initialization.
  • Line 11 contains the following solver controls: CFL is the CFL number, which scales the local time-step size. The particular CFL value depends on whether residual smoothing is used (SMOOP and NCYCSM), and the number of Runge-Kutta stages (C1-C6). A value of CFL = 1.0 has been found to work best with the 3-stage scheme shown in this example. An alternate 5-stage scheme (with coefficients : C(1-5) = 0.25, 0.16667, 0.375, 0.5, 1.0, and FIL(1-5) = 1.0, 0.0, 0.56, 0.0, 0.44) works well with the value CFL = 2.5. CFLV is not functional in this version. Use the default value of 1000. ITACC is not functional in this version. INVBC selects the way in which the wall boundary condition is applied for slip velocity flows, such as those encountered in inviscid flows at walls, or when using wall functions. INVBC = 0.0 results in floating velocity vectors at the wall (not necessarily tangential), with vanishing normal flux specified through the wall, while INVBC = 1.0 explicitly sets the velocity vectors to be tangential to the wall at inviscid wall boundaries. ITWALL TWALL
  • Line 13: VIS1 and VIS2 specify the artificial dissipation. Generally, VIS1 specifies the coefficient of first order dissipation (based on second differences) used only in the vicinity of shock waves, and VIS2 defines the level of background 2nd order accurate dissipation (based on fourth differences). Because the first-order dissipation can severely degrade overall accuracy if it is triggered near leading edges, it is common practice to set VIS1 = 0.0, which may produce some shock oscillations for transonic cases. For the baseline artificial dissipation scheme, the values VIS1 = 0.0 and VIS2 = 20. generally produce good overall accuracy. A lower value of VIS2 = 10. is often used and generally results in lower dissipation/more accurate solutions, although VIS2 = 20. may be required for cases that are difficult to converge. These values are only valid for the (matrix or scalar) artificial dissipation discretization. For the upwind scheme (see the IFLUX optional input parameter), the values VIS1 = 0.0 and VIS2 = 1.0 should be used. Note that for supersonic cases, the upwind scheme is usually required due to the presence of a strong limiter in this scheme. HFACTOR specifies the amount of enthalpy damping to be used. Enthalpy damping is a technique to speed convergence for isenthalpic flows. For Navier-Stokes flows, enthalpy damping should be turned off: HFACTOR = 0.0. For inviscid flows, HFACTOR = 0.25 can be used. SMOOP and NCYCSM are not active in the current version of the solver.
  • Line 15: C1 - C6 specify the Runge-Kutta coefficients for the multi-stage time-stepping scheme. In general, the 3-stage scheme described in this example is used and the values of these coefficients need not be changed. An alternate 5-stage scheme contains the values: C(1-5) = 0.25, 0.16667, 0.375, 0.5, 1.0, and FIL(1-5) = 1.0, 0.0, 0.56, 0.0, 0.44, and CFL = 2.5
  • Line 17: FIL1 - FIL6 specify the coefficients for the dissipative terms for the multi-stage time-stepping scheme. The values of these coefficients need not be changed as long as the 3-stage scheme is employed. The values depicted above can be used for the 5-stage scheme.
  • Line 21: CFLC defines the CFL number used on the coarse multigrid levels. Generally CFLC should have the same value as CFL. CFLVC, SMOOPC and NSMOOC are not active in this version of the solver.
  • Line 23: VIS0 determines the level of artificial dissipation on the coarse multigrid levels (first-order accurate only). Higher values of VIS0 will provide additional robustness at the expense of speed of convergence. The value VIS0 = 4.0 can be used almost exclusively, although values up to VIS0 = 6.0 can be used for additional robustness for difficult cases. MGCYC determines the type of multigrid cycle to be employed. MGCYC = 1.0 corresponds to a multigrid V-cycle, while MGCYC = 2.0 corresponds to a multigrid W-cycle. MGCYC = 2.0 generally delivers faster convergence overall. SMOOMG and NSMOOMG determine the amount of smoothing applied to the coarse grid corrections after they are interpolated to the next finer grid level. This smoothing operation is similar to that employed for the implicit residual smoothing operation. SMOOMG and NSMOOMG therefore have meanings similar to SMOOP and NCYCSM. The optimal values have been found to be SMOOMG = 0.2 to 0.8, and NSMOOMG = 2.0. Higher values such as SMOOMG = 0.8 and NSMOOMG = 3.0 can occasionally be used for additional robustness (at the expense of speed of convergence).
  • Line 27: ITURB selects the physical model or turbulence model to be used. ITURB = 0.0 results in an inviscid flow (Euler) computation. ITURB = 1.0 results in a laminar flow computation (no turbulence effects). ITURB = 4.0 selects the Spalart-Allmaras one-equation turbulence model. IWALL should always be set = 0.0 in this version.
  • Line 29: CT1 - CT6 are the stage coefficients for the turbulence model on the fine grid. The turbulence model is solved simultaneously with, but decoupled from, the flow equations. At each stage in the multi-stage flow time-stepping, a turbulence model iteration can be performed. Using more turbulence iterations than flow solution stages is not permitted. Unlike those for the flow solver, these turbulence stage coefficients can only take on 3 values: CT = 0.0 omits time-stepping the turbulence equations at this stage. CT = 1.0 selects the tridiagonal line solver for the turbulence model at this stage. CT = -1.0 selects the point-wise solver for the turbulence model at this stage. In general, the value CT = 1.0 should be used at every stage corresponding to a flow solution stage.
  • Line 31: CTC1 - CTC6 are the stage coefficients for the turbulence model on the coarse grids. These can take on the same values as described above for the CT fine grid coefficients. When all CTC = 0.0, only fine grid iterations are performed on the turbulence model.
  • Line 33: VIST0 represents the amount of 1st order dissipation employed on the coarse grid levels for the turbulence model. This dissipation can make the multigrid procedure more robust by stabilizing coarse grid iterations, although this comes at the expense of slower overall convergence of the turbulence model. Values between 0.0 and 6.0 have been employed. TSMOOMG and NTSMOOMG are analogous to the SMOOMG and NSMOOMG parameters described on Line 23. They determine the amount of smoothing applied to the coarse grid corrections for the turbulence model after they are interpolated to the next finer grid level. The optimal values have been found to be TSMOOMG = 0.2 to 0.8, and NTSMOOMG = 2.0. Higher values such as TSMOOMG = 0.8 and NTSMOOMG = 3.0 can occasionally be used for additional robustness (at the expense of speed of convergence).
  • Line 36: sets the freestream flow conditions. For a new solution, the flow field is initialized as a uniform flow with these conditions, and the far-field boundary maintains these conditions throughout the solution phase. For a restarted solution, the outer boundary only is affected by these conditions. (Changing the Reynolds number affects the viscosity values in the simulation and is not related to boundary or initial conditions). MACH : sets the freestream Mach number. Z-ANGLE: sets the flow angle relative to the z-axis: for a coordinate system where y (or z) is spanwise, this corresponds to the yaw angle (or incidence angle). Y-ANGLE: sets the flow angle relative to the y-axis: for a coordinate system where y (or z) is spanwise, this corresponds to the incidence angle (or yaw angle). RE: sets the Reynolds number of the flow, based on the distance RE_LENGTH. Thus for RE_LENGTH = 1.0, a Reynolds number of RE per unit length in the grid dimensions is employed.
  • Line 40 defines the values for the force coefficient calculation. These include a reference area (REF_AREA) in grid dimensions (squared), a reference length (REF_LENGTH) in grid dimensions, the location of the point about which the moment coefficients are to be computed (XMOMENT, YMOMENT, ZMOMENT) and a definition of which coordinate is the spanwise coordinate (ISPAN = 2.0 for y-spanwise, ISPAN=3.0 for z-spanwise), since this affects the definition of lift, drag and side-force.
  • Line 43: specifies the directory for the partitioned grid files to be read by the solver. Only the directory name is specified here, not any individual files. The format is always set equal to 2.0.
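As a sketch of a hybrid MPI-OpenMP launch (referenced from the Line 7 description above), assume a grid divided into 4 partitions, NTHREAD = 4.0 set on Line 7 of input.nsu3d, and the nsu3d executable in the current directory (the partition count and executable path are assumptions for illustration only). On systems that require it, the OMP_NUM_THREADS environment variable should be set to match NTHREAD before launching:

  • export OMP_NUM_THREADS=4
  • mpirun -np 4 ./nsu3d input.nsu3d | tee nsu3d.out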

Optional Parameters

Optional parameters can be specified in a list at the end of the input file.

This is done by specifying the name of the parameter, and its value on the same line.

The value must be in floating point decimal form, and must line up with the header: VALUE
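As an illustration only, two optional parameter lines might look like the following; the parameter names are taken from the list later in this section, the numerical values are arbitrary examples, and the exact column positions must be adjusted so that the values line up under the VALUE header of the sample input file:

  NCYC_CHECKPT      500.0
  IOUT_PARALLEL       0.0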

The listed parameters and their values are output at the start of the run, providing the user with a check of the values read in.

The following is a list of all of the available optional parameters sorted by function with the typical values used for each.

BETA_MIN: relates exclusively to low Mach number preconditioning.

This is an optional parameter and is only active for IPC_LOW_MACH = -1. In this case, BETA_MIN may take on values between 0 and 1. Generally (i.e. for IPC_LOW_MACH = +1), low Mach number preconditioning uses the value BETA_MIN = min(1, 3 x Mach**2) by default; for example, at Mach 0.2 this gives BETA_MIN = min(1, 3 x 0.04) = 0.12. Lower values of BETA_MIN may be less stable while providing a stronger low Mach number preconditioning effect. Higher values produce the opposite, and the effect of low-Mach number preconditioning vanishes entirely at BETA_MIN = 1.
BIH_BNDY_FACTR: specifies the solid wall boundary treatment of artificial dissipation (Active for IFLUX_TYPE=0 only). Second differences at the wall are scaled by BIH_BNDY_FACTR prior to calculating the full biharmonic (4th difference) dissipation terms. The two extreme values correspond to:

BIH_BNDY_FACTR = 0: Omit second differences on wall in construction of biharmonic dissipation (Better skin friction, possibly lower robustness)

BIH_BNDY_FACTR = 1: Do not modify second differences at wall.
BIHFACTR: is a factor which scales the linearization of the biharmonic dissipation terms in the Jacobian.

Generally, a 1st order Jacobian is used for the point or line implicit solution algorithm.

When using a biharmonic (artificial) dissipation scheme, there is no simple correspondence between the linearization of these terms and a 1st order (2nd difference dissipation) Jacobian. Therefore, we use the nearest neighbor entries from the biharmonic construction, and scale these by the factor BIHFACTR.

BIHFACTR = 20.0 : Standard Value. Higher values produce a more diagonally dominant matrix (slower but more robust convergence), while lower values produce the opposite.
C_DES: parameter for DES length scale, set to 0.65 according to DES97
CP_FACTOR: If the boundary layer (IBL) code fails to converge at a station, the Cp values are scaled by the factor CP_FACTOR and the IBL station is rerun.

Not active on Nash IBL code.

Recommended value: 0.5
EFACTRC1: Value of the entropy fix for the COARSE GRID (multigrid) discretization. The minimum eigenvalue is limited to EFACTRC1 times the maximum eigenvalue.

EFACTRC1 = 0.0 : No limiting
EFACTRC1 = 1.0 : Scalar Dissipation Scheme (slower, more robust). Affects only convergence, not the final solution (which is determined by the fine grid discretization).
EFACTRC2: Value of the entropy fix for the COARSE GRID (multigrid) Jacobian. The minimum eigenvalue is limited to EFACTRC2 times the maximum eigenvalue.

EFACTRC2 = 0.0 : No limiting
EFACTRC2 = 1.0 : Scalar Dissipation Scheme (slower convergence, more robust). Affects only convergence, not the final solution (which is determined by the fine grid discretization).
EFACTRF1: Value of the entropy fix for the FINE GRID discretization. The minimum eigenvalue is limited to EFACTRF1 times the maximum eigenvalue.

EFACTRF1 = 0.0 : No limiting
EFACTRF1 = 1.0 : Scalar Dissipation Scheme (slower convergence, more diffusion, more robust). Affects convergence AND the final solution (fine grid discretization).
EFACTRF2: Value of the entropy fix for the FINE GRID Jacobian. The minimum eigenvalue is limited to EFACTRF2 times the maximum eigenvalue.

EFACTRF2 = 0.0 : No limiting
EFACTRF2 = 1.0 : Scalar Dissipation Scheme (slower convergence, more robust). Affects only convergence, not the final solution (fine grid Jacobian only).
FACTR_MG: Can be used to enhance the robustness of multigrid by decreasing the size of the residuals transferred to the coarse grids and, in turn, increasing the corrections transferred back to the fine grid. The overall convergence history should be identical, but non-linear instabilities on the coarse grids may be avoided.

NOTE: FACTR_MG=1 recommended. Other values seldom used.
FACTR_MG > 1: Rescale for robustness.
FACTR_MG = 1: Baseline Value, no rescaling of multigrid terms
FD_DES: parameter for selecting DES or DDES
FD_DES = 1 will run DES97 (default)
FD_DES = -1 will run DDES2006
FD_DES = 0 will run the RANS
FGEOM_FRAC: Sets the maximum fraction of limited grid points. When limiting based on grid cell geometry is established (prior to the flow calculation), if more than FGEOM_FRAC * nnode grid points are found to be limited (nnode = total number of grid points), execution is halted.

FGEOM_FRAC > 0 : Use values between 0 and 1.0
FGEOM_FRAC > 1.0 : Limit is Inactive
FK_LIMIT: Parameter in the Venkatakrishnan TVB limiter. Low values give weaker enforcement of monotonicity, while high values tend toward the original monotone limiter.

FK_LIMIT = 0.0 : Removes the effect of the limiter, acting as an unlimited discretization
FK_LIMIT -> infinity : Approaches the original monotone limiter
Typical Values:
5.0 < FK_LIMIT < 100.0
Only active for : IFLUX_TYPE > 0, and ILIM_TYPE = 1
FNSFACTR: Controls the Navier-Stokes discretization.

FNSFACTR = 0.0 : Use thin-layer Navier-Stokes terms based on a pseudo-Laplacian operator constructed on mesh edges. This is the preferred option. The result is actually thin-layer in all three directions, or full Navier-Stokes under the assumption of incompressibility and constant viscosity. This is also the most robust option.

FNSFACTR = 1.0 : Use the full Navier-Stokes terms, using the above pseudo-Laplacian complemented by second differences computed as gradients of gradients for corrections to the Laplacian and inclusion of the cross terms (involves stencils of neighbors of neighbors). This may be less robust since exact Jacobians of these terms are not available.
FNSFACTT: Controls the turbulence model dissipation discretization.

FNSFACTT = 0.0 : Use a pseudo-Laplacian operator based on mesh edges for the dissipation terms. This is the preferred option.

FNSFACTT = 1.0 : Correct the above pseudo-Laplacian operator using second derivatives computed as gradients of gradients (involves stencils of neighbors of neighbors). This may be less robust since exact Jacobians of these terms are not available.
FNSGRAD: Gradient construction options for Navier-Stokes Terms (only active when FNSFACTR = 1.0)

FNSGRAD = 1.0 : Use Green-Gauss Construction for Navier-Stokes Terms when FNSFACTR = 1.0.
FNSGRAD = 2.0 : Use Least-Squares Construction for Navier-Stokes Terms when FNSFACTR = 1.0
FNSGRAT: Gradient construction options for Turbulence Model Dissipative Terms (only active when FNSFACTT = 1.0)

FNSGRAT = 1.0 : Use Green-Gauss Construction for Turbulence Model Dissipative Terms when FNSFACTT = 1.0

FNSGRAT = 2.0 : Use Least-Squares Construction for Turbulence Model Dissipative Terms when FNSFACTT = 1.0
ICHK_PARALLEL: Enable/Disable Parallel Output for Checkpoint Files. Checkpoint files are written at regular intervals. Parallel output is implemented as all processors simultaneously writing to a common file system. On some systems, this causes the Network File System (NFS) to overload. To avoid this, ICHK_PARALLEL=0 instructs each processor to write to the common file system in sequence, one at a time, while the other processors wait their turn. This is slower but more robust for many NFS systems.

ICHK_PARALLEL = 0 : Sequential output to common file system using I/O from one processor at a time.
ICHK_PARALLEL = 1 : Parallel output to common file system using I/O from all processors simultaneously.
IFINE_BNDY_DISSIP: Controls modification of the boundary dissipation for schemes other than biharmonic dissipation (IFLUX_TYPE=0).

IFINE_BNDY_DISSIP = -1 : Zero out dissipation for tangential boundary condition points (see innerbc_0.f)

IFINE_BNDY_DISSIP = 0 : No change, as expected on all grid levels
IFINE_BNDY_DISSIP = 1 : Use coarse grid dissipation values at ghost edge in rfluxb_0.f rfluxb_1.f
IFINE_BNDY_DISSIP = 2 : Use LIM_GEOM values at all boundary edges:
e.g. values used for LIM_GEOM points (values set in set_lim_values.f), but this option requires nsmoo_limit(LIM_GEOM) = 0.
IFLUX_TYPE: Specifies the type of flux or dissipation for the spatial discretization scheme. Currently 3 options are implemented:

IFLUX_TYPE = 0 : Original biharmonic matrix dissipation
IFLUX_TYPE = 1 : Roe Riemann Flux Difference Splitting
IFLUX_TYPE = 2 : Van Leer Flux Vector Splitting
For Navier-Stokes flows, IFLUX_TYPE = 0 is recommended, as there are still issues of accuracy with the other schemes.
For Euler (inviscid) flows, IFLUX_TYPE = 1 is relatively robust for supersonic flows especially when using a limiter (see ILIM_TYPE).
IFREEZE_LIM: Specifies if and how to freeze limiters (for IFLUX_TYPE=1 or 2 only, inactive for IFLUX_TYPE=0). Convergence to steady-state can be disrupted by limiters which switch back and forth at each iteration. Freezing these limiters may enable or enhance otherwise poor convergence, but the final steady-state result may depend on the manner in which the limiters have been frozen and on the convergence history (i.e. a restarted solution may converge to slightly different results). In general these differences should be acceptably small.

IFREEZE_LIM = 0 : Never Freeze limiters
IFREEZE_LIM = 1 : Freeze all limiters at current values after NFREEZE_LIM iterations
IFREEZE_LIM = 2 : After NFREEZE_LIM iterations, begin a moving average of limiter values
IFREEZE_LIM = 3 : After NFREEZE_LIM iterations, use minimum limiter value of previous and next iteration (produces minimum value over all remaining iterations)
IFREEZE_GRAD: Used to enable freezing of the gradient calculation within the stages of the Runge-Kutta multi-stage scheme. This speeds up execution, but may be less stable. Gradients are recomputed at each time step in the first stage of the Runge-Kutta scheme, so the final converged result will be identical regardless of the value of IFREEZE_GRAD.

IFREEZE_GRAD = 0 : Do not freeze any gradient calculations.

IFREEZE_GRAD = 1 : Freeze at all Runge-Kutta stages after 1st stage (but recompute at start of each new time step).
IGRAD_TYPE: Defines the values used in gradient reconstruction with IFLUX_TYPE=1,2 (IGRAD_TYPE is inactive for IFLUX_TYPE=0)

IGRAD_TYPE = 1 : Gradients are computed using Primitive Variables

IGRAD_TYPE = 2 : Gradients are computed using Conserved Variables
IGEOM_LIMIT: Option to use (1) or Discard (0) Grid Based Limiting

IGEOM_LIMIT = 0 : Discard
IGEOM_LIMIT = 1 : Enforce
(IGEOM_LIMIT = 0, Discard, is recommended for this version)
ILIM_TYPE: Select Type of Limiter for Upwind Reconstruction Schemes (IFLUX_TYPE=1,2...Not active for IFLUX_TYPE=0)

ILIM_TYPE < 0 : First Order Discretization: Set All Gradients = 0.0

ILIM_TYPE = 0 : No Limiting, allow non-monotone solutions (least diffusive)

ILIM_TYPE = 1 : Venkatakrishnan Smooth (TVB) Limiter (less diffusive) --> Also select FK_LIMIT value

ILIM_TYPE = 2 : Barth Monotone Limiter (most diffusive)
ILINE_SOLVE: Select / Omit Line Solver. The line solver accelerates convergence for Navier-Stokes flows. Robustness problems have been encountered for IFLUX_TYPE = 1, 2. For IFLUX_TYPE = 0, the line solver should always be used.

ILINE_SOLVE = 0 : Omit Line Solver. Use Point Jacobi only.

ILINE_SOLVE = 1 : Use Line Solver in Boundary Layer Regions.
IMESSAGE_LEVEL: Select Level of output messages when time-step limiting or other limiting occurs.

IMESSAGE_LEVEL=0 : No output messages

IMESSAGE_LEVEL=1 : Moderate output Messages

IMESSAGE_LEVEL=2 : Full Output Message (recommended)
IN_PARALLEL: Enable/Disable Parallel Input for Reading Grid Files. Parallel input is implemented as all processors simultaneously reading from a common file system. On some systems, this causes the Network File System (NFS) to overload. To avoid this, IN_PARALLEL=0 instructs each processor to read from the common file system in sequence, one at a time, while the other processors wait their turn. This is slower but more robust for many NFS systems.

IN_PARALLEL = 0 : Sequential input from common file system using I/O from one processor at a time.

IN_PARALLEL = 1 : Parallel input from common file system using I/O from all processors simultaneously.
INRES_PARALLEL: Enable/Disable Parallel Input for Reading Restart Files. Parallel input is implemented as all processors simultaneously reading from a common file system. On some systems, this causes the Network File System (NFS) to overload. To avoid this, INRES_PARALLEL=0 instructs each processor to read from the common file system in sequence, one at a time, while the other processors wait their turn. This is slower but more robust for many NFS systems.

INRES_PARALLEL = 0 : Sequential input from common file system using I/O from one processor at a time.

INRES_PARALLEL = 1 : Parallel input from common file system using I/O from all processors simultaneously.
IOUT_PARALLEL: Enable/Disable Parallel Output for Writing Restart Files. Parallel output is implemented as all processors simultaneously writing to a common file system. On some systems, this causes the Network File System (NFS) to overload. To avoid this, IOUT_PARALLEL=0 instructs each processor to write to the common file system in sequence, one at a time, while the other processors wait their turn. This is slower but more robust for many NFS systems.

IOUT_PARALLEL = 0 : Sequential output to common file system using I/O from one processor at a time.

IOUT_PARALLEL = 1 : Parallel output to common file system using I/O from all processors simultaneously.
IPC_LOW_MACH: Select or Omit Low Mach Number Preconditioning

IPC_LOW_MACH = 0 : No Preconditioning, use regular base scheme

IPC_LOW_MACH = 1 : Standard Low-Mach Preconditioning (uses the preset value BETA_MIN = min(1, 3 x Mach**2))

IPC_LOW_MACH = -1 : Custom Low-Mach Preconditioning: Also set the value of BETA_MIN
IPC_RAMP: Ramps in the low-Mach number preconditioning for increased robustness.

IPC_RAMP = 0 : No ramping, apply full preconditioning from first iteration.

IPC_RAMP = 100 : Ramp in low-Mach number preconditioning over first 100 cycles and apply full preconditioning thereafter.
IRESTART_AUX_TYPE: Determines content of restart.aux.out auxiliary restart file (or directory with partitioned data)

IRESTART_AUX_TYPE = 0 : No auxiliary restart file

IRESTART_AUX_TYPE = 1 : Cf, Y+ on surface, eddy viscosity elsewhere

IRESTART_AUX_TYPE = 2 : Oil flow surface velocities (first 3 entries), Cf, Y+ on surface

IRESTART_AUX_TYPE = 3 : Transition Mask, last iteration number for limited time step at given grid points

IRESTART_AUX_TYPE = 5 : Pressure Coefficient and Skin Friction vector (Cp,CF,CFx,CFy,CFz)
ISAFE_LEVEL: Controls level of checking for unphysical states in time stepping procedure.

ISAFE_LEVEL = 0 : No Checks for Negative Density/Pressures (slightly faster execution)

ISAFE_LEVEL = 1 : Checks for Negative Density/Pressures (reduces time step accordingly)

ISAFE_LEVEL = 2 : After above fixes, if negative values still occur omit update at these points

ISAFE_LEVEL = 3 : Perturb Energy Values for STUCK pts (where dt -> 0.0)

ISAFE_LEVEL = 4 : More robust itbc (but O(h) flux through walls); requires IFINE_BNDY_DISSIP .ne. -1

Recommended Value: ISAFE_LEVEL = 2
ITE_IBL: Trailing edge treatment for IBL code

ITE_IBL = 0 : No special Trailing Edge Treatment

ITE_IBL = 1 : Extrapolate Velocities at x>xclip_ibl (%chord)

ITE_IBL = -1 : Smooth Velocities at x>xclip_ibl (%chord)

ITE_IBL = 2 : Extrapolate Velocities at last NTE_IBL Points

ITE_IBL = -2 : Smooth Velocities at last NTE_IBL Points

ITE_IBL = 3 : IBL not computed on last NTE_IBL Points and blowing velocities extrapolated
NCYC_CHECKPT: Write out a checkpoint file after every NCYC_CHECKPT cycles. Alternate checkpoint files are written out: checkpt.1 at every odd multiple of NCYC_CHECKPT and checkpt.2 at every even multiple of NCYC_CHECKPT. Rather than overwriting the latest checkpoint file, this approach avoids possible loss of the latest checkpoint if a failure occurs during the checkpoint write.

NCYC_CHECKPT < 0 or = 0 : Omit Checkpoint files
NFREEZE_LIM: Controls limiter freezing. Freeze limiters (IFREEZE_LIM=1) or begin freezing limiters (IFREEZE_LIM=2,3) after NFREEZE_LIM cycles. Only active for IFREEZE_LIM > 0
NTE_IBL: Number of pts when ITE_IBL = 2,3
PMIN: Minimum pressure value before limiting occurs.
PMIN_BCWALL_IBL: Limit on Minimum Wall Pressure at blowing points
PMIN_MG: Minimum pressure value for omitting multigrid updates
RHOMIN: Minimum density value before limiting occurs.
RHOMIN_MG: Minimum density value for omitting multigrid updates.
SIGMA_DES: parameter for the scaling of the artificial damping.

0 < SIGMA_DES < 1 is for a fixed scaling value.
SIGMA_DES = 1 will run the hybrid scheme (equation 6). However, the parameters in equations (6), (7), (8) and (9) need further investigation to ensure the robustness of the scheme. This option is not recommended at this time.
VNMAX_IBL: Maximum Limit on Blowing Velocities
XCLIP_IBL: Fraction of chord after which extrapolation is done when ITE_IBL = 1 (e.g. XCLIP_IBL = 0.99)

Solver output

Running NSU3D produces various output files.


Standard Output: NSU3D creates an information and history file which is written to standard out. This will appear as scrolling screen output if NSU3D is run interactively. In batch mode, this will appear in the batch log output file. Alternatively, the standard output can be redirected to a file, denoted as nsu3d.out in the preceding examples. The UNIX tee utility can be used as shown in the preceding section to simultaneously obtain the standard output log and save this information to a file.


Working Directory:
All other NSU3D generated output is written into a working directory which is created by NSU3D upon startup if it does not already exist. In the distributed version of the code, this directory is called ./WRK, and therefore the additional files are located in the WRK directory created under the current directory from which NSU3D is invoked. This directory name is configurable at compile time and is set in the init_io.f routine (see Source Code Configurable Defaults). If the named working directory already exists, NSU3D will overwrite any existing files or directories that have the same names as files or directories created during the run. However, other contents of the named working directory are not removed prior to the NSU3D run.

Different outputs may be generated for different types of runs. Therefore, a description of the outputs for various run types is given below.

After the flow solution run is finished, the following files will be written into the WRK directory:

input.nsu3d: Copy of input file for this run
restart.out: Directory containing solution values and history logs.
restart.aux.out: Directory containing auxiliary solution values.
endstat1: File certifying successful completion of restart.out output write.
endstat2: File certifying successful completion of restart.aux.out output write.
STOP_RUN: File to enable premature termination of job.
input.postnsu3d: File for post processing.
input.turb.postnsu3d: File for post processing.


Additional files may be written for specific types of cases:

massflobc.#: History of massflow ratio for cases with engine inlet massflow ratio boundary condition. Here # identifies the boundary condition instance associated with this massflow boundary condition.
restart.#: Restart files as above, but output periodically as a checkpointing mechanism every NCYC_CHECKPT iterations or time steps, based on the value of the optional parameter NCYC_CHECKPT (described in the Optional Parameters section above). Checkpoint output is omitted when NCYC_CHECKPT <= 0.

solution.#: Solution files output periodically every NTIME_STEP_OUT time steps, based on the value of the required parameter NTIME_STEP_OUT (time-dependent runs only). solution.# files contain sufficient information for visualizing the solution but cannot be used to restart an NSU3D run. A more complete description of these files is given below:

input.nsu3d: NSU3D creates a copy of the input file used for the current run in the WRK directory. This can be useful for logging the specifics of the run and for enabling the recreation of the run.

restart.out: This directory contains the bulk of the output from the NSU3D run. It contains the final solution values in partitioned format and all additional information required to seamlessly restart a solution. This may include turbulence variables, iblank information, modified grid coordinates for moving grid cases, and multiple previous time levels of the solution, as required for time-dependent problem restarts.

A description of restarting procedures for NSU3D can be found in the Solver Restart Facility section. To postprocess the current solution, the program postnsu3d must be used to reassemble the partitioned solution files. This can be done using the input.postnsu3d file generated by the NSU3D run and located in the WRK directory. A description of post processing procedures can be found in the Post Processing section. restart.out also contains all the history log files for the run, including any history from previous restart runs. This includes a history of residuals, force coefficients, and other quantities. A description of all history files is given in the Additional History and Log Information Files section below.
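For instance, assuming postnsu3d accepts its parameter file on the command line in the same way as nsu3d (an assumption here; see the Post Processing section for the actual invocation), the partitioned solution could be reassembled from the launch directory with:

  • <path>/postnsu3d WRK/input.postnsu3d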

restart.aux.out: This directory contains auxiliary solution values which may be used for visualizing turbulence quantities, producing surface oil flow plots, or other such undertakings. The specific quantities which are written to the restart.aux.out files can be specified through the IRESTART_AUX_TYPE optional parameter (described in the Optional Parameters section above).

endstat1: This file is written to certify that the restart.out files have all been written successfully. View this file to confirm restart.out is complete or to diagnose problems with corrupted restart.out files.

endstat2: This file is written to certify that the restart.aux.out files have all been written successfully. View this file to confirm restart.aux.out is complete or to diagnose problems with corrupted restart.aux.out files.

STOP_RUN: This file is written to the work directory when NSU3D begins the run. It contains a number of iterations which is larger than the total number of requested iterations in the NSU3D input file. For time-dependent cases, this number refers to the number of time steps rather than iterations (or sub-iterations). To force NSU3D to terminate gracefully, edit this file and reduce the number of iterations. NSU3D checks this file periodically and will terminate execution (and output all expected solution and history files) once the specified number of iterations/time-steps in STOP_RUN has been reached or exceeded. The STOP_RUN file can be edited at any time during the NSU3D run.
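A minimal sketch of requesting a graceful stop from the launch directory: inspect WRK/STOP_RUN, then replace the iteration/time-step count it contains with a value the solver has already passed (the exact file layout may vary between versions, so check the file before overwriting it):

  • cat WRK/STOP_RUN
  • echo 1 > WRK/STOP_RUN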

input.postnsu3d: NSU3D automatically creates an input parameter file for postnsu3d, the postprocessor used to assemble partitioned solution files into a contiguous re-ordered solution file for use in tecform. The current restart.out directory, the partitioned grid file directory used in the NSU3D run (required for partition/ordering information) and the number of partitions are all specified automatically in this file by NSU3D, enabling scripting of the postprocessing operation.

input.turb.postnsu3d: This file serves the same purpose as the input.postnsu3d file but applies to the restart.aux.out auxiliary solution values.

Restart File Output and Control

STEADY STATE RUNS: For steady-state runs, restart files can be written periodically at selected intervals for checkpointing purposes. The optional parameter NCYC_CHECKPT (described in the Optional Parameters section above) is used to control this feature; checkpoint output is omitted when NCYC_CHECKPT <= 0.

When a non-zero value of NCYC_CHECKPT is specified in the input parameter file, a solution restart directory named restart.# is written out every NCYC_CHECKPT iterations. The # in the restart.# directory name consists of a 6 digit integer referring to the iteration number of this run at which the solution restart directory was created. This number includes all iterations run on coarser grid levels using the MMESH parameter in the current run, and thus may not correspond to the fine grid iteration number. Additionally, the restart checkpointing feature is only active on the finest/last grid of the MMESH sequence.

Using the NCYC_CHECKPT parameter, multiple restart.# directories will be written out during a typical run, and these will be written to the current working directory (./WRK). At the end of the run, final restart.out and restart.aux.out directories will also be written to the working directory (./WRK). Note that the auxiliary variables found in restart.aux.out are only written out at the end of the run and are not affected by the value of NCYC_CHECKPT.
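For example, the checkpoint directories accumulated so far can be listed from the launch directory (the six-digit suffix is the iteration number at which each checkpoint was written):

  • ls -d WRK/restart.[0-9]*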

UNSTEADY/TIME-DEPENDENT RUNS: For unsteady or time-dependent runs, there are two possible checkpointing output directories. The first is the restart.# directories, which are controlled by the NCYC_CHECKPT parameter set in the input parameter file for NSU3D. However, in this case, the value of NCYC_CHECKPT refers to the time step number and the restart.# directories will be output every NCYC_CHECKPT time steps (as opposed to iterations or sub-iterations), and the # number in the directory name refers to the time step value. At the end of the entire time-dependent run, final restart.out and restart.aux.out directories are written to the current working directory (./WRK). Note that the auxiliary variables found in restart.aux.out are only written out at the end of the run and are not affected by the value of NCYC_CHECKPT.

An additional set of solution directories can also be written out during a time-dependent run. These directories, labeled solution.#, consist of a reduced set of information which cannot be used to restart the solution process, but which is sufficient for visualizing the solution using standard visualization tools and for making animations. These files may be useful for constructing time-dependent animations where many solution instances are required, because they typically contain less data than the complete restart directories. Output of the solution.# directories is controlled by the NTIME_STEP_OUT parameter specified in the input parameter list for NSU3D, which is a mandatory parameter for time-dependent runs. During a time-dependent run, solution.# directories will be written to the current working directory (./WRK) every NTIME_STEP_OUT time steps, and the # character represents a six-digit value of the time step at which the solution file was created. A final solution.# file is not necessarily produced at the end of the unsteady run; in order to guarantee the output of a solution.# directory at the final time step, the total number of time steps should be evenly divisible by NTIME_STEP_OUT.

Values of NTIME_STEP_OUT equal to 0 or negative values will omit all solution.# output.

NOTE: Solution files do not contain any history information or logs.

Additional History and Log Information Files

NSU3D also produces a set of history or log files which record the time run or convergence histories of various quantities including residuals and force coefficients. At the end of the run, these files are written to the restart.out directory. These are also written to any intermediate restart.# checkpoint directories created during the run. There are two variants for each file: the history based only on the current run (*.out file) and the history based on the current run appended to all previous runs used to restart the current run (*.restart.out file). In the case where no restart is used for the current run, the *.out and *.restart.out files will be identical.

All history files are written in a format which is directly readable in TECPLOT. Header lines with run-time information are preceded by a # character which results in them being ignored by TECPLOT.
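These files load directly into TECPLOT. For other plotting tools, the commented header lines can simply be stripped first, for example:

  • grep -v '^#' WRK/restart.out/rplot.restart.out > rplot.dat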

The set of files includes:

Residual histories and one representative force coefficient history (CL or CZ)
rplot.out
rplot.restart.out


History of wind axis force and moment coefficients (based on the no-slip boundary condition (RANS) or the slip boundary condition (Euler)):
force1.out
force1.restart.out


History of wind axis force coefficients broken down by pressure and friction components (based on the no-slip boundary condition (RANS) or the slip boundary condition (Euler)):
force2.out
force2.restart.out


History of grid/body axis force and moment coefficients (based on the no-slip boundary condition (RANS) or the slip boundary condition (Euler)):
force3.out
force3.restart.out
Note: For ISPAN=1 cases, force3.* and force1.* values will be identical.


History of wind axis force coefficients broken down by geometry components (as specified in *.comp file used in mesh preprocessing):
comp_force.wind.out
comp_force.wind.restart.out


History of grid/body axis force coefficients broken down by geometry components (as specified in *.comp file used in mesh preprocessing):
comp_force.xyz.out
comp_force.xyz.restart.out


History of various flow quantities based on iteration count:
history_ires.out
history_ires.restart.out


History of various flow quantities based on time step count:
history_istep.out
history_istep.restart.out


NOTE: These files will be empty for steady-state runs. For time-dependent runs, the *ires.out files record the histories at each subiteration throughout the entire run, while the *istep files record the histories only at each time step.

A total of 64 quantities are monitored in the history* files.
A description of these quantities is given in the Monitored Values in history files section below.

-------------------------------------------------------------
Mode of operation:
The rplot, force and history files are constructed and logged as follows during an NSU3D run:
  • The values for the current run are temporarily stored in memory as they accumulate and then periodically flushed to the *.out files. During the run, these *.out files are located in the ./WRK directory (as opposed to the ./WRK/restart.out directory). They can be used to plot the convergence history while the solver is still running, in order to assess whether the solver should be terminated (using the STOP_RUN file described above); see the monitoring example after this list.
  • The frequency for flushing the output to the *.out files is controlled by hardwired parameters (kflush_ires, kflush_istep, kflush_comp_force) which are set in the routine init_io.f. If periodic flushing is not desired, it can be turned off by setting these values to be very large. However, the dimension of the buffers used to store this information within NSU3D must also be set large enough: flushing will automatically occur when the size of the buffers exceeds the memory space allocated to these buffers. Buffer sizes are set through the parameters mcount_ires, mcount_istep, mcount_comp_force which are set in common_dynamic.f. Flushing is also triggered whenever an intermediate runtime restart file (i.e. checkpointing in response to the NCYC_CHECKPT parameter) is written out.
  • At the end of the NSU3D run, the *.out files are moved from the working directory (./WRK) to the restart.out directory. At the same time, if the current run was restarted from a previous solution, the history *.restart.out files contained in the directory used to restart the current solution are copied to the current restart.out directory and the history information from the current run is appended to the restart history information to produce the *.restart.out history files for the current run. In this manner, the *.restart.out files contain all the history including all previous restart runs used to produce the final solution. All history files are thus associated with the current solution values contained in the current restart.out directory and are therefore archived in that directory. This permits their use in future restart runs based on this solution. Additionally, all history files are removed from the current working directory (./WRK) since they are now archived in the restart.out subdirectory. For intermediate runtime restart outputs (i.e. solution checkpointing controlled by the NCYC_CHECKPT parameter), the requisite history files and restart history files are written to the intermediate restart directories labeled restart.#, where # denotes the cycle number of the run.
  • The names for all output directories and files are set in the source code in init_io.f. These can be modified at compile time if alternate file names are desired. A list of source code configurable settings can be found in the Source Code Configurable Defaults section below.
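For example, the residual history of the current run can be monitored from the launch directory while the solver is executing (the file name is the default set in init_io.f):

  • tail -f WRK/rplot.out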

Monitored Values in history files

The following are descriptions of all of the possible outputs contained in the NSU3D history files:

"NCYC"
Cycle number (iterations, or subiterations for time-dependent runs)
based on the current run alone.

"NCYC_TOTAL"
Cycle number (iterations, or subiterations for time-dependent runs)
based on all previous restart runs.

"NSTEP"
Time step number based on the current run alone.

"NSTEP_TOTAL"
Time step number based on all restart runs.

"ICYC"
Current counter for loop executed in NSU3D
(can be either time step loop, or subiteration loop)

"TIME"
Current time value (number of time steps multiplied by time step size)
for this run (initialized to 0.0 at beginning of current run).

"TIME_TOTAL"
Total time value including all restart time values.

"CPUTIME"
Current cpu time consumed by this run.

"CPUTIME_TOTAL"
Cumulative cpu time consumed by this run
and all previous restart runs.

"DELTA_RHO"
RMS Density delta (corrections or changes) generated at this iteration.

"HRMS"
RMS value of H-H0 where H0 is freestream enthalpy.
For enthalpy preserving scheme this should go to zero.
However, for RANS cases this does not vanish.

"DELTA_TURB1"
RMS Turbulence delta (corrections or changes) for first turbulence equation generated at this iteration.
Measure of convergence of first turbulence equation.

"DELTA_TURB2"
RMS Turbulence delta (corrections or changes) for second turbulence equation generated at this iteration.
Measure of convergence of second turbulence equation.

"NSUPERSONIC_PTS"
Number of supersonic points in flow field

"MAX_DELTA_RHO"
Maximum value of DELTA_RHO in flow field.

"XMAX_DELTA_RHO"
X-coordinate location of MAX_DELTA_RHO in flow field.

"YMAX_DELTA_RHO"
Y-coordinate location of MAX_DELTA_RHO in flow field.

"ZMAX_DELTA_RHO"
Z-coordinate location of MAX_DELTA_RHO in flow field.

"MAX_EDDY_VISC"
Maximum value of eddy viscosity in flow field.

"NWALLPTS_TEMP_RELAX"
Number of wall points being relaxed for specified wall temperature cases.

"RESID_RHO_TIMEACC"
RMS average of the time-dependent residual for the density equation,
computed as (Vol*w^(n+1) - Vol*w^n)/DT + R(w) --> 0 at convergence.

"RESID_RHO_TIMEACC/VOL"
RMS average of time-dependent residual RESID_RHO_TIMEACC divided by volume

"DRHO/DT*VOL"
RMS average of the time derivative of density multiplied by the cell volume for unsteady runs.
(Computed as ((rho^(n+1) - rho^n)/DT) * Vol)

"DRHO/DT"
RMS average of Time derivative of density for unsteady runs.
(Computed as (rho^(n+1) - rho^n)/DT)

"RESID_TURB1_TIMEACC"
RMS average of the time-dependent residual for the first turbulence equation,
computed as (Vol*w^(n+1) - Vol*w^n)/DT + R(w) --> 0 at convergence.

"RESID_TURB1_TIMEACC/VOL"
RMS average of time-dependent residual RESID_TURB1_TIMEACC divided by volume

"DTURB1/DT*VOL"
RMS average of the time derivative of the first turbulence equation variable multiplied by the cell volume for unsteady runs.
(Computed as ((turb1^(n+1) - turb1^n)/DT) * Vol)

"DTURB1/DT"
RMS average of the time derivative of the first turbulence equation variable for unsteady runs.
(Computed as (turb1^(n+1) - turb1^n)/DT)

"CX"
Force coefficient in X-coordinate direction (based on boundary condition type INSBC

"CY"
Force coefficient in Y-coordinate direction (based on boundary condition type INSBC

"CZ"
Force coefficient in Y-coordinate direction (based on boundary condition type INSBC

"CXP"
Pressure Force coefficient in X-coordinate direction (based on boundary condition type INSBC

"CYP"
Pressure Force coefficient in Y-coordinate direction (based on boundary condition type INSBC

"CZP"
Pressure Force coefficient in Z-coordinate direction (based on boundary condition type INSBC

"CXF"
Friction Force coefficient in X-coordinate direction (based on boundary condition type INSBC

"CYF"
Friction Force coefficient in Y-coordinate direction (based on boundary condition type INSBC

"CZF"
Friction Force coefficient in Z-coordinate direction (based on boundary condition type INSBC

"CMX"
Moment coefficient about X-axis (based on boundary condition type INSBC

"CMY"
Moment coefficient about Y-axis (based on boundary condition type INSBC

"CMZ"
Moment coefficient about Z-axis (based on boundary condition type INSBC

"CMXP"
Pressure Moment coefficient about X-axis (based on boundary condition type INSBC

"CMYP"
Pressure Moment coefficient about Y-axis (based on boundary condition type INSBC

"CMZP"
Pressure Moment coefficient about Z-axis (based on boundary condition type INSBC

"CMXF"
Friction Moment coefficient about X-axis (based on boundary condition type INSBC

"CMYF"
Friction Moment coefficient about Y-axis (based on boundary condition type INSBC

"CMZF"
Friction Moment coefficient about Z-axis (based on boundary condition type INSBC

"CL"
Lift coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CD"
Drag coefficient (defined by ISPAN (link) wind axes) (based on boundary condition type INSBC

"CSIDE"
Side Force coefficient (defined by ISPAN (link) wind axes) (based on boundary condition type INSBC

"CLP"
Pressure Lift coefficient (defined by ISPAN ) wind axes) (based on boundary condition type INSBC

"CDP"
Pressure Drag coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CSIDEP"
Pressure Side Force coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CLF"
Friction Lift coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CDF"
Friction Drag coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CSIDEF"
Friction Side Force coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_PITCH"
Pitching Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_YAW"
Yaw Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_ROLL"
Roll Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_PITCHP"
Pressure Pitching Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_YAWP"
Pressure Yaw Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_ROLLP"
Pressure Roll Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_PITCHF"
Friction Pitching Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_YAWF"
Friction Yaw Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_ROLLF"
Friction Roll Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC
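
As an illustration only (the actual orientation of the wind axes is set by ISPAN), for the common case in which the pitch plane is the x-z plane and the sideslip angle is zero, the wind-axis lift and drag coefficients relate to the body-axis coefficients through the angle of attack \alpha:

  C_L = C_Z \cos\alpha - C_X \sin\alpha, \qquad
  C_D = C_X \cos\alpha + C_Z \sin\alpha

The pressure/friction decomposition listed above (e.g. CXP/CXF, CLP/CLF) applies in either axis system.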

Source Code Configurable Defaults

The following routines contain hardwired defaults which can be modified at compile time by the user. A list and description of the configurable parameters is given below.

common_dynamic.f
init_io.f
set_lim_values.f

------------------------------------------------------------
common_dynamic.f
This file contains some hardwired dimensions for NSU3D memory allocation. Although the current values should be suitable for most applications, they may occasionally need to be modified for special cases:

parameter(mmesh=10) ---> Maximum number of meshes in multigrid
parameter(mpart_files=2048) ---> Maximum number of partitions in grid file
parameter(mpart_loc=512) ---> Maximum number of partitions per processor
parameter(mbctyp=40) ---> Maximum number of boundary condition instances
parameter(mstage=10) ---> Maximum number of Runge-Kutta stages
parameter(mbclist=8) ---> Maximum number of boundary condition types
parameter(mcwords=25) ---> Maximum number of words in mpi buffers (fixed)
parameter(mcount_ires=1000) ---> Maximum buffer size for iteration based histories
parameter(mcount_istep=1000) ---> Maximum buffer size for time-step based histories
parameter(mcount_comp_force=1000) ---> Maximum buffer size for storing force histories

Note that the buffer sizes for ires, istep, and component forces contain numerous entries (values to be stored) per iteration
or time step; therefore these should not be set excessively large, since this will require large memory resources.
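
As a hypothetical example (values chosen only for illustration), a user whose grid file contains more than 2048 partitions could enlarge the corresponding limits in common_dynamic.f and recompile:

c     Hypothetical edit to common_dynamic.f: raise the partition
c     limits (defaults are 2048 and 512, as listed above), then
c     rebuild NSU3D so the new dimensions take effect.
      parameter(mpart_files=4096)
      parameter(mpart_loc=1024)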


------------------------------------------------------------------------------------------
init_io.f

This routine contains parameters and file names for I/O related functions in NSU3D.
Configurable parameters:

IRESTART_LOG = 1 --> Log restart names by local (=0) or global (=1) step/cycle number.
When intermediate restart/solution files are output, they are given the name restart.#, where # corresponds to the time step or iteration number (depending on whether this is a time-dependent or steady-state run, respectively). The numbering scheme can be based on the local time step or iteration number, i.e. the number within the current run alone (starting at 1), or on the global number, i.e. the number including all time steps/iterations which preceded the current run in cases where the current run has been restarted from a previous solution. For example, if a previous run performed 500 steps and the current restarted run writes a restart file at its 100th step, local numbering names it restart.100 while global numbering names it restart.600.

IO_SYSTEMCALL = 2 !Possible Values: 0,1,2
NSU3D performs various system calls, principally for I/O functions such as creating directories and copying/appending history files. This parameter either suppresses any information concerning these system calls (=0) or writes information about them to standard output (=1, =2). When set =2, each message is prefaced by ">>Syscall: " so that these messages can be identified more easily.

IO_CLEAN = 1 !Possible Values: 0=No Clean, 1=Clean
When set =1, this causes all history files (i.e. rplot.out, rplot.history.out, etc.) to be removed from the current working directory at the end of the run, since these are logged in the restart.out subdirectory. For IO_CLEAN=0, files are left in both restart.out and the current working directory. Files whose names do not correspond to any of these history files are not removed from the working directory in either case.

IO_SLEEP = 2
On various hardware platforms there can be a delay in the NFS file-system cache updating procedure, which may cause NSU3D to fail when trying to write to a directory that was created by one processor but has not yet been cached by NFS and is thus not visible to all other processors. This parameter causes NSU3D to wait IO_SLEEP seconds after creating the directory, allowing NFS to catch up and cache the directory. If NSU3D fails on I/O operations, try increasing this value; the appropriate value is hardware dependent.

kflush_ires = 300
History information based on iteration count is periodically flushed to the history files written in the current working directory (and ultimately copied to the restart.out subdirectory). This parameter determines the flushing frequency (i.e. flush every 300 iterations).

kflush_istep = 2
History information based on time-step count (for time-dependent simulations) is periodically flushed to the history files written in the current working directory (and ultimately copied to the restart.out subdirectory). This parameter determines the flushing frequency (i.e. flush every 2 time steps). Since a time step contains numerous subiterations, the time-step flushing frequency is typically smaller than the iteration-based flushing frequency, thus allowing a suitable wall-clock update frequency of the history files.

kflush_comp_force = 300
History information for component-based forces is periodically flushed to the history files written in the current working directory (and ultimately copied to the restart.out subdirectory). This parameter determines the flushing frequency (i.e. flush every 300 iterations).


OUTPUT_WRK_DIRNAME1 = './WRK'
The working directory name is set using this parameter. The default is ./WRK, which creates a working directory WRK under the current directory. All history and restart files will be written to this directory. If this directory already exists, it is overwritten by new runs.

RESTART_DIRNAME1 = 'restart.out'
RESTART_DIRNAME2 = 'restart.aux.out'
SOLUTION_DIRNAME1 = 'solution.out'
SPARSE_DIRNAME1 = 'sparse.out'


The names of the directories for restart and solution files are set here. All other file names are also set in init_io.f. These files are either self-explanatory or are covered elsewhere in the documentation.
Other file names:
FILENAME0(11) = 'rplot.out'
FILENAME0(12) = 'rplot.restart.out'
FILENAME0(13) = 'force1.out'
FILENAME0(14) = 'force2.out'
FILENAME0(15) = 'force3.out'
FILENAME0(16) = 'force1.restart.out'
FILENAME0(17) = 'force2.restart.out'
FILENAME0(18) = 'force3.restart.out'
FILENAME0(19) = 'history_ires.out'
FILENAME0(20) = 'history_ires.restart.out'
FILENAME0(21) = 'history_istep.out'
FILENAME0(22) = 'history_istep.restart.out'
FILENAME0(23) = 'comp_force.xyz.out'
FILENAME0(24) = 'comp_force.wind.out'
FILENAME0(25) = 'comp_force.xyz.restart.out'
FILENAME0(26) = 'comp_force.wind.restart.out'
FILENAME0(27) = 'ibl.stations.tec.proc.'
FILENAME0(28) = 'ibl.log.proc.'
FILENAME0(29) = 'massflobc.'

FILENAME0(51) = 'input.postnsu3d'
FILENAME0(52) = 'input.turb.postnsu3d'
FILENAME0(53) = 'endstat1'
FILENAME0(54) = 'endstat2'

--Actuator Disk Files (optional)
FILENAME0(61) = 'smemrd_nodes.nam'
FILENAME0(62) = 'smemrd_nodes.p3d'
FILENAME0(63) = 'smemrd_grid.p3d'
FILENAME0(64) = 'smemrd_q.p3d'
FILENAME0(65) = 'smemrd_force.p3d'
FILENAME0(66) = 'smemrd_data.nam'
FILENAME0(67) = 'smemrd_data_'
FILENAME0(68) = 'smemrd_flow.nam'
FILENAME0(69) = 'smemrd_flow_'

-------------------------------------------------------------------------------------------
set_lim_values.f

This routine sets the default values for all optional parameters. Because all optional parameters can be specified in the nsu3d input file, they are described in the Solver Input File documentation; please refer to that description for each parameter.

Solver Restart Facility

NSU3D provides the capability for restarting both steady-state and time-dependent runs using the restart.out or restart.# directories generated by previous NSU3D runs. Note that the solution.# directories cannot be used to restart NSU3D runs, since they do not contain the information required for restarting. (They can, however, be used to visualize the solution.)

STEADY-STATE RUNS:
To restart a steady-state run, specify the restart directory name in the input parameter file under the RESTART FILE heading.
The restart facility is then activated using the parameters RESTARTF and RESTARTT.

RESTARTF = 0.0 bypasses the restart function and flow variables are initialized to freestream values.
RESTARTF = 1.0 results in the flow field being initialized from the values in the specified restart file/directory.
RESTARTT = 1.0 specifies that the initial turbulence values are read from the restart file/directory (only applicable when RESTARTF = 1.0).
RESTARTT = 0.0 sets the initial turbulence values to freestream values.
RNTCYC is no longer functional and should be set = 0.0.

To produce an exact restart, use RESTARTF = 1.0 and RESTARTT = 1.0.
To restart only the flow field values and use freestream turbulence values, use RESTARTF = 1.0 and RESTARTT = 0.0.

UNSTEADY or TIME-DEPENDENT RUNS:
The restart procedure for time-dependent runs is similar to that described above for steady-state runs. However, for time-dependent runs using ITACC=2 or 3 (BDF2 or BDF3), additional time-level information is required to produce an exact restart with no loss in time accuracy. If the specified restart file was produced by a run using ITACC=2 or 3, then the necessary information for a fully time-accurate restart is available in the restart file and the restart will be exact.
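
For reference, the standard backward-difference formulas for a variable w with a constant physical time step \Delta t (written here generically; NSU3D's implementation details may differ) show why the higher-order schemes need additional previous time levels:

  \mathrm{BDF1:}\quad \frac{\partial w}{\partial t} \approx \frac{ w^{n+1} - w^{n} }{ \Delta t }
  \mathrm{BDF2:}\quad \frac{\partial w}{\partial t} \approx \frac{ 3 w^{n+1} - 4 w^{n} + w^{n-1} }{ 2 \Delta t }
  \mathrm{BDF3:}\quad \frac{\partial w}{\partial t} \approx \frac{ 11 w^{n+1} - 18 w^{n} + 9 w^{n-1} - 2 w^{n-2} }{ 6 \Delta t }

BDF2 and BDF3 require w^{n-1} (and w^{n-2}), which is why a restart file written by a steady-state or lower-order run cannot supply the information needed for an exact higher-order restart.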

However, if the restart file was generated from a previous steady-state run, or from a run with an ITACC value smaller than the current ITACC value, then the restart file does not contain sufficient information to enable an exact restart. In this case, a restart using lower time accuracy on the first time step can be achieved by specifying RESTARTF = -1.0. For example, when restarting from a steady-state solution while specifying ITACC=2.0, RESTARTF = -1.0 must be used. The first time step will then consist of a BDF1 time step (i.e. first-order accuracy in time, equivalent to ITACC=1.0), and all subsequent time steps will revert to BDF2 (ITACC=2) time accuracy.

In the case where the restart file was generated using the same ITACC value as currently set in the new run, but with a different time-step value, RESTARTF = -1.0 must also be specified, resulting in the first time step being computed using BDF1.

The type of time discretization at each time step (BDF1, BDF2, BDF3) is output to standard out, providing a means for checking the operation of the restart facility on the first several time steps.