- Getting Started
- Preprocessing
- Running NSU3D
- Launching the Solver
- Solver Input File
- Optional Parameters
- Solver Output
- Restart File Output and Control
- Additional History and Log Info Files
- Monitored Values in History Files
- Source Code Configurable Defaults
- Solver Restart Facility
- Post Processing
- Utility Code Reference
- Solver File Formats
- NSU3D Tutorial

- mpirun -np <num_partitions> <path>/nsu3d input.nsu3d

- mpirun -np <num_partitions> <path>/nsu3d input.nsu3d > nsu3d.out
- mpirun -np <num_partitions> <path>/nsu3d input.nsu3d | tee nsu3d.out

This number must be consistent with the number of partitions in the grid file being read in (and specified in the input.nsu3d parameter input file).

The next argument to mpirun or mpiexec is the executable name for nsu3d, which may require a complete pathname to be visible from the location where mpirun is being invoked.

NSU3D is typically compiled with the MPI libraries and should be run as shown above. However, it is possible to compile NSU3D without the MPI libraries (see example), in which case it is run as a regular sequential executable, i.e.:

- <path>/nsu3d input.nsu3d

A sample input list file for NSU3D is shown below. This input list is separated into various regions which deal with the general run parameters (Lines 1 to 17), the coarse grid parameters (Lines 19 to 23), the turbulence model parameters (Lines 25 to 33), the flow conditions (Line 36), the force coefficient definitions (Line 40), and the input file name (Line 43). The numerical values given in the sample file are in most cases optimal and should be used as the baseline values.

Input File Description

- Line 1: contains a title for the particular case.
- Line 3: controls the restart function. RESTARTF = 1.0 instructs the solver to read the initial flow field from the restart directory listed on Line 5. If RESTARTF = 0.0, no restart directory name is required and the flow is initialized with freestream values. If RESTARTF = 1.0 and RESTARTT = 0.0, then the flow solution is restarted from the restart directory, but the turbulence values are reinitialized to zero and recomputed from scratch. If RESTARTF = 1.0 and RESTARTT = 1.0, then both flow and turbulence values are read in from the restart directory. RNTCYC denotes the number of solution cycles to be performed on the turbulence model (with the flow field frozen) immediately after the restart directory has been read in. This option can be used, for example, to pre-converge the turbulence model, particularly when a flow field is read in from the restart directory but the turbulence values are not, possibly because an alternate turbulence model has been selected.
- Line 5: specifies the name of the restart directory. If RESTARTF = 0.0, this entry is ignored.
- Line 7: MMESH is used for mesh sequencing, i.e. running on various meshes of the multigrid sequence. This parameter determines the number of lines of type Line 9 that are to follow. It is used in conjunction with the NMESH (number of multigrid meshes) and MESHLEVEL (identifies the mesh on which the solution is computed) parameters. For example, a typical Full Multigrid (FMG) mesh sequencing algorithm would solve the flow on the coarsest mesh (MESHLEVEL = 4.0, for the case where four grid levels are available), using a single mesh in the multigrid sequence (NMESH = 1.0), then interpolate the solution to the next finer mesh (MESHLEVEL = 3.0), solve the flow on this mesh using two meshes in the sequence (NMESH = 2.0), and continue on each finer grid in this manner until the finest grid is reached (MESHLEVEL = 1.0). Each solution on a given grid level involves an entry of the type on Line 9, and the total number of these entries must correspond to the MMESH value set here. In fact, the MMESH facility is more general than one which simply offers the possibility of performing mesh sequencing. Any sequence of mesh solutions can be prescribed. For example, a partially converged solution on the finest mesh can first be achieved using the single-grid (non-multigrid) algorithm, and the multigrid algorithm on the finest mesh can be invoked afterwards by using MMESH = 2.0, with the first line containing NMESH = 1.0, MESHLEVEL = 1.0 and the second line containing NMESH = 4.0, MESHLEVEL = 1.0. Additionally, the value MESHLEVEL = -1.0 enables the solution of the first-order discretization on the finest grid level. This may be useful for pre-converging cases which experience start-up problems, thus increasing the overall robustness of the solver.
The example input file above shows an initial phase of full multigrid mesh sequencing, followed by a first order accurate multigrid solution phase on the finest mesh, followed by a single grid mesh solution phase on the finest grid, followed by the second order accurate multigrid solution on the finest grid, which yields the final result. A good strategy for increasing robustness at startup is to perform 10 or 20 single grid cycles or first-order accurate multigrid cycles on the finest grid, followed by the second order accurate multigrid solution for several hundred cycles. Full multigrid mesh sequencing in general does not provide substantial convergence acceleration over the entire solution process, and is not often invoked. It can however be used to diagnose a problem with one of the agglomerated coarse levels. Thus, in general, a value MMESH = 2.0 is prescribed, while using only lines 9.5 and 9.6 (or 9.4 and 9.6). NTHREAD denotes the number of OpenMP threads to be used during parallel execution. For a hybrid MPI-OpenMP run, this refers to the number of threads running under each MPI process. On some systems it may also be necessary to set the OMP_NUM_THREADS environment variable to enable the requested number of threads to be employed. If for such a reason NTHREAD threads cannot be spawned, nsu3d will terminate with a message to that effect.
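As an illustration, the startup strategy recommended above (a few first-order single-grid cycles on the finest mesh, followed by the second-order multigrid solution) might be entered as follows. The column layout is schematic and the values are illustrative; they are not taken from an actual input file:

```
MMESH    NTHREAD
2.0      1.0
NCYC     NPRNT    NMESH    MESHLEVEL    CFLMIN    RAMPCYC    TURBFREEZE
20.      1.       1.       -1.0         0.5       20.        0.0
500.     10.      4.       1.0          1.0       0.         0.0
```

The first Line 9 entry performs 20 first-order, single-grid cycles on the finest mesh; the second performs 500 second-order multigrid cycles using four grid levels.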
- Line 9: This line should be replicated (with changes) MMESH times. Each instance of this line refers to the solution on a particular mesh of the multigrid sequence, and defines the parameters required for that solution process. NCYC specifies the number of (multigrid) cycles to be executed. The maximum eddy viscosity computed throughout the entire flow field is printed out every NPRNT cycles. NMESH specifies the number of multigrid levels (including the fine grid). The minimum value is 1, which reproduces a single-grid algorithm, and the maximum value is NLEVELS + 1, where NLEVELS is the value specified in the AMG3D input list when constructing the *.amg file for this run. MESHLEVEL specifies the mesh in the multigrid sequence on which the solution is to be obtained. This is used in grid sequencing or in preconditioning the solution by performing single fine grid iterations and/or first-order accurate fine grid iterations. MESHLEVEL = 1.0 always refers to the finest grid of the sequence. MESHLEVEL = -1.0 also refers to the finest grid of the sequence, but switches the discretization to a first-order accurate form which is more rapidly converged. MESHLEVEL = 2.0 refers to the second mesh in the sequence, i.e. the first coarse multigrid mesh. MESHLEVEL = 3.0 refers to the next coarser level, and so on. Since the coarse multigrid levels are based on agglomeration, a full second-order discretization on these coarse levels is not possible, so it is important to remember that all MESHLEVEL > 1.0 grids are only first-order accurate. CFLMIN and RAMPCYC are used to ramp up the CFL number for cases with start-up difficulty. The initial CFL number is given by CFLMIN, which is then ramped up to the final value CFL (Line 11) linearly over RAMPCYC cycles. TURBFREEZE has the effect of freezing the turbulence model after TURBFREEZE multigrid cycles.
A value TURBFREEZE = 0.0 omits any freezing action, while a value TURBFREEZE = -1.0 initiates freezing immediately after initialization.
- Line 11 contains the following solver controls: CFL is the CFL number, which scales the local time-step size. The particular CFL value depends on whether residual smoothing is used (SMOOP and NCYCSM), and on the number of Runge-Kutta stages (C1-C6). A value of CFL = 1.0 has been found to work best with the 3-stage scheme shown in this example. An alternate 5-stage scheme (with coefficients C(1-5) = 0.25, 0.16667, 0.375, 0.5, 1.0, and FIL(1-5) = 1.0, 0.0, 0.56, 0.0, 0.44) works well with the value CFL = 2.5. CFLV is not functional in this version; use the default value of 1000. ITACC is not functional in this version. INVBC selects the way in which the wall boundary condition is applied for slip velocity flows, such as those encountered in inviscid flows at walls, or when using wall functions. INVBC = 0.0 results in floating velocity vectors at the wall (not necessarily tangential), with vanishing normal flux specified through the wall, while INVBC = 1.0 explicitly sets the velocity vectors to be tangential to the wall at inviscid wall boundaries. ITWALL and TWALL relate to the specified wall temperature boundary condition.
- Line 13: VIS1 and VIS2 specify the artificial dissipation. Generally, VIS1 specifies the coefficient of first-order dissipation (based on second differences) used only in the vicinity of shock waves, and VIS2 defines the level of background second-order accurate dissipation (based on fourth differences). Because the first-order dissipation can severely degrade overall accuracy if it is triggered near leading edges, it is common practice to set VIS1 = 0.0, which may produce some shock oscillations. The values VIS1 = 0.0 and VIS2 = 20. generally produce good overall accuracy. These values are independent of whether the artificial dissipation discretization or the upwind (matrix dissipation) discretization scheme is employed. HFACTOR specifies the amount of enthalpy damping to be used. Enthalpy damping is a technique to speed convergence for isenthalpic flows. For Navier-Stokes flows, enthalpy damping should be turned off: HFACTOR = 0.0. For inviscid flows, HFACTOR = 0.25 can be used. SMOOP and NCYCSM are not active in the current version of the solver.
- Line 15: C1 - C6 specify the Runge-Kutta coefficients for the multi-stage time-stepping scheme. In general, the 3-stage scheme described in this example is used and the values of these coefficients need not be changed. An alternate 5-stage scheme contains the values: C(1-5) = 0.25, 0.16667, 0.375, 0.5, 1.0, and FIL(1-5) = 1.0, 0.0, 0.56, 0.0, 0.44, and CFL = 2.5
- Line 17: FIL1 - FIL6 specify the coefficients for the dissipative terms for the multi-stage time-stepping scheme. The values of these coefficients need not be changed as long as the 3-stage scheme is employed. The values depicted above can be used for the 5-stage scheme.
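For reference, the alternate 5-stage scheme quoted above would be entered as follows. The layout is schematic; the coefficient values are those given in the text, and setting the unused sixth coefficient to zero is an assumption:

```
C1      C2        C3      C4      C5      C6
0.25    0.16667   0.375   0.5     1.0     0.0
FIL1    FIL2      FIL3    FIL4    FIL5    FIL6
1.0     0.0       0.56    0.0     0.44    0.0
```

This scheme is used together with CFL = 2.5 on Line 11.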
- Line 21: CFLC defines the CFL number used on the coarse multigrid levels. Generally, CFLC should have the same value as CFL. CFLVC, SMOOPC and NSMOOC are not active in this version of the solver.
- Line 23: VIS0 determines the level of artificial dissipation on the coarse multigrid levels (first-order accurate only). Higher values of VIS0 will provide additional robustness at the expense of speed of convergence. The value VIS0 = 4.0 can be used almost exclusively, although values up to VIS0 = 6.0 can be used for additional robustness in difficult cases. MGCYC determines the type of multigrid cycle to be employed. MGCYC = 1.0 corresponds to a multigrid V-cycle, while MGCYC = 2.0 corresponds to a multigrid W-cycle. MGCYC = 2.0 generally delivers faster convergence overall. SMOOMG and NSMOOMG determine the amount of smoothing applied to the coarse grid corrections after they are interpolated to the next finer grid level. This smoothing operation is similar to that employed for the implicit residual smoothing operation. SMOOMG and NSMOOMG therefore have meanings similar to SMOOP and NCYCSM. The optimal values have been found to be SMOOMG = 0.2 to 0.8, and NSMOOMG = 2.0. Higher values such as SMOOMG = 0.8 and NSMOOMG = 3.0 can occasionally be used for additional robustness (at the expense of speed of convergence).
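Collecting the baseline values recommended above for Line 23 (schematic layout, illustrative only):

```
VIS0    MGCYC   SMOOMG   NSMOOMG
4.0     2.0     0.8      2.0
```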
- Line 27: ITURB selects the physical model or turbulence model to be used. ITURB = 0.0 results in an inviscid flow (Euler) computation. ITURB = 1.0 results in a laminar flow computation (no turbulence effects). ITURB = 4.0 selects the Spalart-Allmaras one-equation turbulence model. IWALL should always be set to 0.0 in this version.
- Line 29: CT1 - CT6 are the stage coefficients for the turbulence model on the fine grid. The turbulence model is solved simultaneously with, but decoupled from, the flow equations. At each stage in the multi-stage flow time-stepping, a turbulence model iteration can be performed. Using more turbulence iterations than flow solution stages is not permitted. Unlike those for the flow solver, these turbulence stage coefficients can only take on 3 values: CT = 0.0 omits time-stepping the turbulence equations at this stage; CT = 1.0 selects the tridiagonal line solver for the turbulence model at this stage; CT = -1.0 selects the point-wise solver for the turbulence model at this stage. In general, the value CT = 1.0 should be used at every stage corresponding to a flow solution stage.
- Line 31: CTC1 - CTC6 are the stage coefficients for the turbulence model on the coarse grids. These can take on the same values as described above for the CT fine grid coefficients. When all CTC = 0.0, only fine grid iterations are performed on the turbulence model.
- Line 33: VIST0 represents the amount of first-order dissipation employed on the coarse grid levels for the turbulence model. This dissipation can make the multigrid procedure more robust by stabilizing coarse grid iterations, although this comes at the expense of slower overall convergence of the turbulence model. Values between 0.0 and 6.0 have been employed. TSMOOMG and NTSMOOMG are analogous to the SMOOMG and NSMOOMG parameters described on Line 23. They determine the amount of smoothing applied to the coarse grid corrections for the turbulence model after they are interpolated to the next finer grid level. The optimal values have been found to be TSMOOMG = 0.2 to 0.8, and NTSMOOMG = 2.0. Higher values such as TSMOOMG = 0.8 and NTSMOOMG = 3.0 can occasionally be used for additional robustness (at the expense of speed of convergence).
- Line 36: sets the freestream flow conditions. For a new solution, the flow field is initialized as a uniform flow with these conditions, and the far-field boundary maintains these conditions throughout the solution phase. For a restarted solution, only the outer boundary is affected by these conditions. (Changing the Reynolds number affects the viscosity values in the simulation and is not related to boundary or initial conditions.) MACH: sets the freestream Mach number. Z-ANGLE: sets the flow angle relative to the z-axis; for a coordinate system where y (or z) is spanwise, this corresponds to the yaw angle (or incidence angle). Y-ANGLE: sets the flow angle relative to the y-axis; for a coordinate system where y (or z) is spanwise, this corresponds to the incidence angle (or yaw angle). RE: sets the Reynolds number of the flow, based on the distance RE_LENGTH. Thus for RE_LENGTH = 1.0, a Reynolds number of RE per unit length in the grid dimensions is employed.
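For example, a transonic case at Mach 0.75 with 2.5 degrees incidence (y-spanwise coordinate system) and a Reynolds number of 3 million per unit grid length might specify (illustrative values, schematic layout):

```
MACH    Z-ANGLE   Y-ANGLE   RE        RE_LENGTH
0.75    0.0       2.5       3.0e6     1.0
```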
- Line 40 defines the values for the force coefficient calculation. These include a reference area (REF_AREA) in grid dimensions (squared), a reference length (REF_LENGTH) in grid dimensions, the location of the point about which the moment coefficients are to be computed (XMOMENT, YMOMENT, ZMOMENT) and a definition of which coordinate is the spanwise coordinate (ISPAN = 2.0 for y-spanwise, ISPAN=3.0 for z-spanwise), since this affects the definition of lift, drag and side-force.
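For example, a y-spanwise wing case using unit reference quantities with moments taken about the quarter-chord point might specify (illustrative values, schematic layout):

```
REF_AREA   REF_LENGTH   XMOMENT   YMOMENT   ZMOMENT   ISPAN
1.0        1.0          0.25      0.0       0.0       2.0
```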
- Line 43: specifies the directory for the partitioned grid files to be read by the solver. Only the directory name is specified here, not any individual files. The format is always set equal to 2.0.

This is done by specifying the name of the parameter, and its value on the same line.

The value must be in floating point decimal form, and must line up with the header: VALUE
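For example, an optional parameter entry might look as follows. NCYC_CHECKPT is a real optional parameter described later in this document; the header text and column positions here are schematic, but the value must be floating point and aligned under the VALUE header:

```
                 VALUE
NCYC_CHECKPT     500.0
```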

The listed parameters and their values are output at the start of the run, providing the user with a check of the values read in.

The following is a list of all of the available optional parameters sorted by function with the typical values used for each.

This is an optional parameter and is only active for IPC_LOW_MACH = -1. In this case, BETA_MIN may take on values between 0 and 1. Generally (i.e. IPC_LOW_MACH = +1), low Mach number preconditioning uses the value BETA_MIN = min(1, 3 x Mach**2) by default. Lower values of BETA_MIN may be less stable while providing a stronger low Mach number preconditioning effect. Higher values produce the opposite, with the effect of low Mach number preconditioning vanishing at BETA_MIN = 1.
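As a sketch of the default behavior described above (the formula BETA_MIN = min(1, 3 x Mach**2) is from the text; the function name is illustrative and not part of NSU3D):

```python
def default_beta_min(mach):
    """Default low Mach number preconditioning parameter for IPC_LOW_MACH = +1.

    Small at low Mach number (strong preconditioning effect); saturates at 1
    (no preconditioning effect) for Mach numbers above about 0.58.
    """
    return min(1.0, 3.0 * mach ** 2)
```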

Generally, a 1st order Jacobian is used for the point or line implicit solution algorithm.

When using a biharmonic (artificial) dissipation scheme, there is no simple correspondence between the linearization of these terms and a 1st order (2nd difference dissipation) Jacobian. Therefore, we use the nearest neighbor entries from the biharmonic construction, and scale these by the factor BIHFACTR.

Not active on Nash IBL code.

Recommended value: 0.5

NOTE: FACTR_MG=1 recommended. Other values seldom used.

FD_DES = 1 will run DES97 (default)

FD_DES = -1 will run DDES2006

FD_DES = 0 will run the RANS

LIMIT_inf -> Monotone : Limiter FK

5.0 < FK_LIMIT < 100.0

Only active for : IFLUX_TYPE>0, and ILIM_TYPE = 1

ICHK_

e.g. values used for LIM_GEOM points (values set in set_lim_values.f). Note that this option requires nsmoo_limit(LIM_GEOM) = 0.

For Navier-Stokes flows, IFLUX_TYPE = 0 is recommended, as there are still issues of accuracy with the other schemes.

For Euler (inviscid) flows, IFLUX_TYPE = 1 is relatively robust for supersonic flows especially when using a limiter (see ILIM_TYPE).

(Recommend =0 Discard for this version)

(requires IFINE_BNDY_DISSIP .ne. -1) Recommended Value: ISAFE_LEVEL = 2

0 < SIGMA_DES < 1 is for the fixed scaling value.

SIGMA_DES = 1 will run the hybrid scheme (equation 6). However, the parameters in equations (6), (7), (8) and (9) need further investigation to ensure the robustness of the scheme. This option is not recommended at this time.

Standard Output: NSU3D creates an information and history file which is written to standard out. This will appear as scrolling screen output if NSU3D is run interactively. In batch mode, this will appear in the batch log output file. Alternatively, the standard output can be redirected to a file, denoted as nsu3d.out in the preceding examples. The UNIX tee command can be used as shown in the preceding section to view the standard output log and simultaneously save this information to a file.

Working Directory:

All other NSU3D generated output is written into a working directory which is created by NSU3D upon startup if it does not already exist. In the distributed version of the code, this directory is called ./WRK, and therefore the additional files are located in the WRK directory which is created under the current directory from which NSU3D is invoked. This directory name is configurable at compile time and is set in the init_io.f routine (see Source Code Configurable Defaults). If the named working directory already exists, NSU3D will overwrite any existing files or directories with similarly named files or directories created during the run. However, other contents of the named working directory are not removed prior to the NSU3D run.

Different outputs may be generated for different types of runs. Therefore, a description of the outputs for various run types is given below.

After the flow solution run is finished, the following files will be written into the WRK directory:

Additional files may be written for specific types of cases:

Checkpoint files are written every so many iterations or time steps based on the value of the optional parameter NCYC_CHECKPT. NCYC_CHECKPT > 0 : Write out a checkpoint file after every NCYC_CHECKPT cycles. Alternating checkpt files are written out: checkpt.1 at every odd multiple of NCYC_CHECKPT and checkpt.2 at every even multiple of NCYC_CHECKPT. Rather than overwriting the latest checkpt file, this approach avoids possible loss of the latest checkpt file if a failure occurs during the checkpt write.

NCYC_CHECKPT < 0 or = 0 : Omit Checkpoint files.
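The alternating naming scheme can be sketched as follows (assumed logic illustrating the description above, not NSU3D source code): alternating between two file names means a failure during one write can never destroy the only existing checkpoint.

```python
def checkpoint_name(cycle, ncyc_checkpt):
    """Return the checkpoint file written at this cycle, or None if none is due.

    checkpt.1 is written at odd multiples of NCYC_CHECKPT,
    checkpt.2 at even multiples; NCYC_CHECKPT <= 0 disables checkpointing.
    """
    if ncyc_checkpt <= 0 or cycle == 0 or cycle % ncyc_checkpt != 0:
        return None
    multiple = cycle // ncyc_checkpt
    return "checkpt.1" if multiple % 2 == 1 else "checkpt.2"
```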

A description of restarting procedures for NSU3D can be found under Solver Restart Facility. To postprocess the current solution, the program postnsu3d must be used to reassemble the partitioned solution files. This can be done using the input.postnsu3d file generated by the NSU3D run and located in the WRK directory. A description of post processing procedures can be found under Post Processing. restart.out also contains all the history log files for the run, including any history from previous restart runs. This includes a history of residuals, force coefficients, and other quantities. A description of all history files is given under Additional History and Log Info Files.

When a non-zero value of NCYC_CHECKPT is specified in the input parameter file, a solution restart directory named restart.# is written out every NCYC_CHECKPT iterations. The # in the restart.# directory name consists of a 6 digit integer referring to the iteration number of this run at which the solution restart directory was created. This number includes all iterations run on coarser grid levels using the MMESH parameter in the current run, and thus may not correspond to the fine grid iteration number. Additionally, the restart checkpointing feature is only active on the finest/last grid of the MMESH sequence.
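The restart.# naming described above uses a 6-digit, zero-padded iteration number; a minimal sketch (the helper name is illustrative):

```python
def restart_dir_name(iteration):
    """Build the restart directory name, e.g. iteration 1500 -> restart.001500."""
    return "restart.{:06d}".format(iteration)
```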

Using the NCYC_CHECKPT parameter, multiple restart.# directories will be written out during a typical run, and these will be written to the current working directory (./WRK). At the end of the run, a final restart.out and restart.aux.out directories will also be written to the working directory (./WRK). Note that the auxiliary variables found in restart.aux.out are only written out at the end of the run and are not affected by the value of NCYC_CHECKPT.

An additional set of solution directories can also be written out during a time-dependent run. These directories, labeled solution.# consist of a reduced set of information which cannot be used to restart the solution process, but which is sufficient for visualizing the solution using standard visualization tools, and for making animations. These files may be useful for constructing time dependent animations where many solution instances are required because they typically contain less data than the complete restart directories. Output of the solution.# directories is controlled by the NTIME_STEP_OUT parameter specified in the input parameter list for NSU3D, which is a mandatory parameter to be included for time dependent runs. During a time dependent run, solution.# directories will be written to the current working directory (./WRK) every NTIME_STEP_OUT time steps, and the # character represents a six digit value of the time step at which the solution file was created. A final solution.# file is not necessarily produced at the end of the unsteady run. In order to guarantee the output of a solution.# directory at the final time step the total number of time steps should be evenly divisible by NTIME_STEP_OUT.

Values of NTIME_STEP_OUT equal to 0 or negative values will omit all solution.# output.
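The solution.# output schedule described above can be sketched as follows (assumed logic, for illustration only): output occurs every NTIME_STEP_OUT steps, and output at the final step is only guaranteed when the total step count divides evenly.

```python
def solution_output_steps(total_steps, ntime_step_out):
    """Return the time steps at which solution.# directories are written."""
    if ntime_step_out <= 0:
        return []  # NTIME_STEP_OUT <= 0 omits all solution.# output
    return list(range(ntime_step_out, total_steps + 1, ntime_step_out))
```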

All history files are written in a format which is directly readable in TECPLOT. Header lines with run-time information are preceded by a # character which results in them being ignored by TECPLOT.

The set of files includes:

Residual histories and one representative force coefficient history (CL or CZ)

rplot.restart.out

History of wind axis force and moment coefficients (based on the no-slip boundary condition (RANS) or slip boundary condition (Euler)):

force1.restart.out

History of wind axis force coefficients broken down into pressure and friction components (based on the no-slip boundary condition (RANS) or slip boundary condition (Euler)):

force2.restart.out

History of grid/body axis force and moment coefficients (based on the no-slip boundary condition (RANS) or slip boundary condition (Euler)):

force3.restart.out

Note: For ISPAN=1 cases, force3.* and force1.* values will be identical.

History of wind axis force coefficients broken down by geometry components (as specified in *.comp file used in mesh preprocessing):

comp_force.wind.restart.out

History of grid/body axis force coefficients broken down by geometry components (as specified in *.comp file used in mesh preprocessing):

comp_force.xyz.restart.out

History of various flow quantities based on iteration count:

history_ires.restart.out

History of various flow quantities based on time step count:

history_istep.restart.out

NOTE: These files will be empty for steady-state runs. For time-dependent runs, the *ires.out files record the histories at each subiteration throughout the entire run, while the *istep files record the histories only at each time step.

A total of 64 quantities are monitored in the history* files.

A description of these quantities can be found here (link coming soon).

-------------------------------------------------------------

The rplot, force and history files are constructed and logged as follows during an NSU3D run:

- The values for the current run are temporarily stored in memory as they accumulate and then periodically output to the *.out files. During the run in this manner, the *.out files are located in the ./WRK directory (as opposed to the ./WRK/restart.out directory). These files can be used to plot convergence history while the solver is still running, in order to assess whether the solver should be terminated (using the STOP_RUN file (link)).
- The frequency for flushing the output to the *.out files is controlled by hardwired parameters (kflush_ires, kflush_istep, kflush_comp_force) which are set in the routine init_io.f. If periodic flushing is not desired, it can be turned off by setting these values to be very large. However, the dimension of the buffers used to store this information within NSU3D must also be set large enough. Flushing will automatically occur when the size of the buffers exceeds the memory space allocated to these buffers. Buffer sizes are set through the parameters mcount_ires, mcount_istep, mcount_comp_force which are set in common_dynamic.f. Flushing is also triggered whenever an intermediate runtime restart file (i.e. checkpointing in response to the NCYC_CHECKPT parameter) is written out.
- At the end of the NSU3D run, the *.out files are moved from the working directory (./WRK) to the restart.out directory. At the same time, if the current run was restarted from a previous solution, the history *.restart.out files contained in the directory used to restart the current solution are copied to the current restart.out directory and the history information from the current run is appended to the restart history information to produce the *.restart.out history files for the current run. In this manner, the *.restart.out files contain all the history including all previous restart runs used to produce the final solution. All history files are thus associated with the current solution values contained in the current restart.out directory and are therefore archived in that directory. This permits their use in future restart runs based on this solution. Additionally, all history files are removed from the current working directory (./WRK) since they are now archived in the restart.out subdirectory. For intermediate runtime restart outputs (i.e. solution checkpointing controlled by the NCYC_CHECKPT parameter), the requisite history files and restart history files are written to the intermediate restart directories labeled restart.#, where # denotes the cycle number of the run.
- The names for all output directories and files are set in the source code in init_io.f. These can be modified at compile time if alternate file names are desired. A list of source code configurable settings can be found here (link).

"NCYC"

Cycle number (iterations, or subiterations for time-dependent runs)

based on the current run alone.

"NCYC_TOTAL"

Cycle number (iterations, or subiterations for time-dependent runs)

based on all previous restart runs.

"NSTEP"

Time step number based on the current run alone.

"NSTEP_TOTAL"

Time step number based on all restart runs.

"ICYC"

Current counter for loop executed in NSU3D

(can be either time step loop, or subiteration loop)

"TIME"

Current time value (number of time steps multiplied by time step size)

for this run (initialized to 0.0 at beginning of current run).

"TIME_TOTAL"

Total time value including all restart time values.

"CPUTIME"

Current cpu time consumed by this run.

"CPUTIME_TOTAL"

Cumulative cpu time consumed by this run

and all previous restart runs.

"DELTA_RHO"

RMS Density delta (corrections or changes) generated at this iteration.

"HRMS"

RMS value of H-H0 where H0 is freestream enthalpy.

For enthalpy preserving scheme this should go to zero.

However, for RANS cases this does not vanish.

"DELTA_TURB1"

RMS Turbulence delta (corrections or changes) for first turbulence equation generated at this iteration.

Measure of convergence of first turbulence equation.

"DELTA_TURB2"

RMS Turbulence delta (corrections or changes) for second turbulence equation generated at this iteration.

Measure of convergence of second turbulence equation.

"NSUPERSONIC_PTS"

Number of supersonic points in flow field

"MAX_DELTA_RHO"

Maximum value of DELTA_RHO in flow field.

"XMAX_DELTA_RHO"

X-coordinate location of MAX_DELTA_RHO in flow field.

"YMAX_DELTA_RHO"

Y-coordinate location of MAX_DELTA_RHO in flow field.

"ZMAX_DELTA_RHO"

Z-coordinate location of MAX_DELTA_RHO in flow field.

"MAX_EDDY_VISC"

Maximum value of eddy viscosity in flow field.

"NWALLPTS_TEMP_RELAX"

Number of wall points being relaxed for specified wall temperature cases.

"RESID_RHO_TIMEACC"

RMS average of time-dependent residual

computed as ((Vol*w^(n+1) - Vol*w^n)/DT + R(w)) --> 0 at convergence.

for density equation.

"RESID_RHO_TIMEACC/VOL"

RMS average of time-dependent residual RESID_RHO_TIMEACC divided by volume

"DRHO/DT*VOL"

RMS average of Time derivative of density multiplied by cell volume for unsteady runs.

(Computed as ((rho^(n+1) - rho^n)/DT)*vol)

"DRHO/DT"

RMS average of Time derivative of density for unsteady runs.

(Computed as (rho^(n+1) - rho^n)/DT)

"RESID_TURB1_TIMEACC"

RMS average of time-dependent residual

computed as ((Vol*w^(n+1) - Vol*w^n)/DT + R(w)) --> 0 at convergence.

for first turbulence equation.

"RESID_TURB1_TIMEACC/VOL"

RMS average of time-dependent residual RESID_TURB1_TIMEACC divided by volume

"DTURB1/DT*VOL"

RMS average of Time derivative of first turbulence equation multiplied by cell volume for unsteady runs.

(Computed as ((turb1^(n+1) - turb1^n)/DT)*vol)

"DTURB1/DT"

RMS average of Time derivative of first turbulence equation for unsteady runs.

(Computed as (turb1^(n+1) - turb1^n)/DT)

"CX"

Force coefficient in X-coordinate direction (based on boundary condition type INSBC).

"CY"

Force coefficient in Y-coordinate direction (based on boundary condition type INSBC).

"CZ"

Force coefficient in Z-coordinate direction (based on boundary condition type INSBC).

"CXP"

Pressure Force coefficient in X-coordinate direction (based on boundary condition type INSBC).

"CYP"

Pressure Force coefficient in Y-coordinate direction (based on boundary condition type INSBC).

"CZP"

Pressure Force coefficient in Z-coordinate direction (based on boundary condition type INSBC).

"CXF"

Friction Force coefficient in X-coordinate direction (based on boundary condition type INSBC).

"CYF"

Friction Force coefficient in Y-coordinate direction (based on boundary condition type INSBC).

"CZF"

Friction Force coefficient in Z-coordinate direction (based on boundary condition type INSBC).

"CMX"

Moment coefficient about X-axis (based on boundary condition type INSBC

"CMY"

Moment coefficient about Y-axis (based on boundary condition type INSBC

"CMZ"

Moment coefficient about Z-axis (based on boundary condition type INSBC

"CMXP"

Pressure Moment coefficient about X-axis (based on boundary condition type INSBC

"CMYP"

Pressure Moment coefficient about Y-axis (based on boundary condition type INSBC

"CMZP"

Pressure Moment coefficient about Z-axis (based on boundary condition type INSBC

"CMXF"

Friction Moment coefficient about X-axis (based on boundary condition type INSBC

"CMYF"

Friction Moment coefficient about Y-axis (based on boundary condition type INSBC

"CMZF"

Friction Moment coefficient about Z-axis (based on boundary condition type INSBC

"CL"

Lift coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CD"

Drag coefficient (defined by ISPAN (link) wind axes) (based on boundary condition type INSBC

"CSIDE"

Side Force coefficient (defined by ISPAN (link) wind axes) (based on boundary condition type INSBC

"CLP"

Pressure Lift coefficient (defined by ISPAN ) wind axes) (based on boundary condition type INSBC

"CDP"

Pressure Drag coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CSIDEP"

Pressure Side Force coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CLF"

Friction Lift coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CDF"

Friction Drag coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CSIDEF"

Friction Side Force coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_PITCH"

Pitching Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_YAW"

Yaw Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_ROLL"

Roll Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_PITCHP"

Pressure Pitching Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_YAWP"

Pressure Yaw Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_ROLLP"

Pressure Roll Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_PITCHF"

Friction Pitching Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_YAWF"

Friction Yaw Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC

"CM_ROLLF"

Friction Roll Moment coefficient (defined by ISPAN wind axes) (based on boundary condition type INSBC
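To illustrate the relationship between the coordinate-axis and wind-axis coefficients, here is a minimal sketch of the rotation for the common case where Y is the spanwise axis (one possible ISPAN setting) and sideslip is zero; the function and its sign conventions are assumptions for the example, not NSU3D's actual transformation:

```python
import math

def wind_axes(cx, cz, alpha_deg):
    """Rotate body-axis force coefficients into wind axes, assuming Y is
    the spanwise axis and zero sideslip (illustrative conventions only)."""
    a = math.radians(alpha_deg)
    cl = cz * math.cos(a) - cx * math.sin(a)   # lift: normal to freestream
    cd = cx * math.cos(a) + cz * math.sin(a)   # drag: along freestream
    return cl, cd

cl, cd = wind_axes(cx=0.01, cz=0.5, alpha_deg=0.0)  # alpha=0: CL=CZ, CD=CX
```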

common_dynamic.f

init_io.f

set_lim_values.f

------------------------------------------------------------

common_dynamic.f

This file contains some hardwired dimensions for NSU3D memory allocation. Although the current values should be suitable for most applications, they may occasionally need to be modified for special cases:

parameter(mmesh=10) ---> Maximum number of meshes in multigrid

parameter(mpart_files=2048) ---> Maximum number of partitions in grid file

parameter(mpart_loc=512) ---> Maximum number of partitions per processor

parameter(mbctyp=40) ---> Maximum number of boundary condition instances

parameter(mstage=10) ---> Maximum number of Runge-Kutta stages

parameter(mbclist=8) ---> Maximum number of boundary condition types

parameter(mcwords=25) ---> Maximum number of words in mpi buffers (fixed)

parameter(mcount_ires=1000) ---> Maximum buffer size for iteration based histories

parameter(mcount_istep=1000) ---> Maximum buffer size for time-step based histories

parameter(mcount_comp_force=1000) ---> Maximum buffer size for storing force histories

Note that the buffer sizes for ires, istep, and component forces contain numerous entries (values to be stored) per iteration or time step; therefore they should not be set excessively large, since this would require large memory resources.

------------------------------------------------------------------------------------------

init_io.f

This routine contains parameters and file names for I/O related functions in NSU3D.

Configurable parameters:

IRESTART_LOG = 1 --> Log restart names by local (=0) or global (=1) step/cycle number. When intermediate restart/solution files are output, they are given the name restart.# where # corresponds to the time step or iteration number (depending on whether this is a time-dependent or steady-state run, respectively). The numbering scheme can be based on the local time step or iteration number, meaning the number for the current run alone (starting at 1 in the current run), or on the global number, meaning the number including all restart time steps/iterations which preceded the current run in cases where the current run has been restarted from a previous solution.
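The two numbering schemes can be pictured with a small sketch (the function name and arguments are hypothetical, not NSU3D internals):

```python
def restart_dirname(local_step, prior_steps, use_global=True):
    """Illustrative naming: restart.<#> where # is either the step count
    of the current run alone (local) or that count plus the steps/cycles
    accumulated in all previous restarted runs (global)."""
    step = prior_steps + local_step if use_global else local_step
    return f"restart.{step}"

print(restart_dirname(50, 200))                    # global: restart.250
print(restart_dirname(50, 200, use_global=False))  # local: restart.50
```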

IO_SYSTEMCALL = 2 !Possible Values: 0,1,2

NSU3D performs various system calls principally for I/O functions, such as creating directories and copying/appending history files. This parameter either suppresses any information concerning system calls (=0) or writes out information to standard out concerning these system calls (=1,=2). When set =2, each message will be prefaced by ">>Syscall: " in order to more clearly identify these messages.

IO_CLEAN = 1 !Possible Values: 0=No Clean, 1=Clean

When set =1, this causes all history files (e.g. rplot.out, rplot.history.out, etc.) to be removed from the current working directory at the end of the run, since they are logged in the restart.out subdirectory. With IO_CLEAN=0, the files are left in both restart.out and the current working directory. Files whose names do not correspond to any of these history files are never removed from the working directory in either case.

IO_SLEEP = 2

On various hardware platforms there can be a delay in the NFS file system cache updating procedure, which may cause NSU3D to fail when trying to write to a directory which was created by one processor but has not yet been cached by NFS and is thus still not visible to all other processors. This parameter causes NSU3D to wait for IO_SLEEP seconds after creating the directory to allow NFS to catch up and cache the directory. If NSU3D fails on I/O operations, try increasing this value. The appropriate value is hardware dependent.
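The pattern amounts to the following (a sketch in Python rather than the solver's Fortran; the function name is hypothetical):

```python
import os
import time

IO_SLEEP = 2  # seconds to wait; hardware dependent (default from init_io.f)

def make_dir_for_all_ranks(path, sleep_s=IO_SLEEP):
    """Create a directory, then pause so a lagging NFS cache can make it
    visible to the other MPI ranks before they attempt to write into it."""
    os.makedirs(path, exist_ok=True)
    time.sleep(sleep_s)
```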

kflush_ires = 300

History information based on iteration count is periodically flushed to the history files written in the current working directory (and ultimately copied to the restart.out subdirectory). This parameter determines the flushing frequency (i.e. flush every 300 iterations).

kflush_istep = 2

History information based on time-step count (for time-dependent simulations) is periodically flushed to the history files written in the current working directory (and ultimately copied to the restart.out subdirectory). This parameter determines the flushing frequency (i.e. flush every 2 time steps). Since a time step contains numerous subiterations, the time-step flushing frequency is typically smaller than the iteration-based flushing frequency, thus allowing a suitable wall-clock update frequency of the history files.

kflush_comp_force = 300

History information for component based forces is periodically flushed to the history files written in the current working directory (and ultimately copied to the restart.out subdirectory). This parameter determines the flushing frequency (i.e. flush every 300 iterations).
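The flushing behavior described above amounts to the following (an illustrative Python sketch; NSU3D's Fortran implementation and file format differ):

```python
class HistoryBuffer:
    """Entries accumulate in memory and are flushed (appended) to the
    history file every kflush iterations. Hypothetical helper class."""

    def __init__(self, path, kflush=300):
        self.path, self.kflush, self.pending = path, kflush, []

    def record(self, iteration, values):
        self.pending.append((iteration, values))
        if iteration % self.kflush == 0:
            self.flush()

    def flush(self):
        with open(self.path, "a") as f:
            for it, vals in self.pending:
                f.write(f"{it} " + " ".join(str(v) for v in vals) + "\n")
        self.pending.clear()
```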

OUTPUT_WRK_DIRNAME1 = './WRK'

The working directory name is set with this parameter. The default is ./WRK, which creates a working directory WRK under the current directory. All history and restart files are written to this directory. If the directory already exists, it is overwritten by new runs.

RESTART_DIRNAME1 = 'restart.out'

RESTART_DIRNAME2 = 'restart.aux.out'

SOLUTION_DIRNAME1 = 'solution.out'

SPARSE_DIRNAME1 = 'sparse.out'

The names for the directories for restart and solution files are set here. All other file names are also set in init_io.f. Descriptions of these files are self-explanatory or are covered elsewhere in the documentation.

Other file names:

FILENAME0(11) = 'rplot.out'

FILENAME0(12) = 'rplot.restart.out'

FILENAME0(13) = 'force1.out'

FILENAME0(14) = 'force2.out'

FILENAME0(15) = 'force3.out'

FILENAME0(16) = 'force1.restart.out'

FILENAME0(17) = 'force2.restart.out'

FILENAME0(18) = 'force3.restart.out'

FILENAME0(19) = 'history_ires.out'

FILENAME0(20) = 'history_ires.restart.out'

FILENAME0(21) = 'history_istep.out'

FILENAME0(22) = 'history_istep.restart.out'

FILENAME0(23) = 'comp_force.xyz.out'

FILENAME0(24) = 'comp_force.wind.out'

FILENAME0(25) = 'comp_force.xyz.restart.out'

FILENAME0(26) = 'comp_force.wind.restart.out'

FILENAME0(27) = 'ibl.stations.tec.proc.'

FILENAME0(28) = 'ibl.log.proc.'

FILENAME0(29) = 'massflobc.'

FILENAME0(51) = 'input.postnsu3d'

FILENAME0(52) = 'input.turb.postnsu3d'

FILENAME0(53) = 'endstat1'

FILENAME0(54) = 'endstat2'

--Actuator Disk Files (optional)

FILENAME0(61) = 'smemrd_nodes.nam'

FILENAME0(62) = 'smemrd_nodes.p3d'

FILENAME0(63) = 'smemrd_grid.p3d'

FILENAME0(64) = 'smemrd_q.p3d'

FILENAME0(65) = 'smemrd_force.p3d'

FILENAME0(66) = 'smemrd_data.nam'

FILENAME0(67) = 'smemrd_data_'

FILENAME0(68) = 'smemrd_flow.nam'

FILENAME0(69) = 'smemrd_flow_'

-------------------------------------------------------------------------------------------

set_lim_values.f

This routine sets the default values for all optional parameters. Since all optional parameters can be specified in the NSU3D input file, they are described under Optional Parameters; please refer to that description for each parameter.

STEADY-STATE RUNS:

To restart a steady state run, specify the restart directory name in the input parameter file under the RESTART FILE heading.

The restart facility is then activated using the parameters RESTARTF and RESTARTT.

RESTARTF = 0.0 bypasses the restart function and flow variables are initialized to freestream values.

RESTARTF = 1.0 results in the flow field being initialized from the values in the specified restart file/directory.

RESTARTT = 1.0 specifies that the initial turbulence values are read from the restart file/directory (only valid when RESTARTF = 1.0).

RESTARTT = 0.0 sets the initial turbulence values to freestream values.

RNTCYC is no longer functional and should be set to 0.0.

To produce an exact restart, use RESTARTF = 1.0 and RESTARTT = 1.0.

To restart only the flow field values and use freestream turbulence values, use RESTARTF = 1.0 and RESTARTT = 0.0.
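For example, the restart-control line of the parameter input file (Line 3 of the sample input list) could be set as follows for an exact restart; the column layout shown is illustrative and should follow your existing input.nsu3d:

```
RESTARTF  RESTARTT  RNTCYC
1.0       1.0       0.0
```

Changing RESTARTT to 0.0 on the second line gives the flow-only restart with freestream turbulence described above.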

UNSTEADY or TIME-DEPENDENT RUNS:

The restart procedure for time-dependent runs is similar to that described above for steady-state runs. However, for time-dependent runs using ITACC=2 or 3 (BDF2 or BDF3), additional time level information is required to produce an exact restart with no loss in time accuracy. If the specified restart file was produced from a run using ITACC=2 or 3, then the necessary information for a fully time accurate restart is available in the restart file and the restart will be exact.

However, if the restart file was generated from a previous steady state run, or from a run with an ITACC value smaller than the current ITACC value, then the restart file does not contain sufficient information to enable an exact restart. In this case, a restart using a lower time accuracy on the first time step can be achieved by specifying RESTARTF = -1.0. For example, if restarting from a steady-state solution but specifying ITACC=2.0, RESTARTF = -1.0 must be used. In this case, the first time step will consist of a BDF1 time step (i.e. first-order accuracy in time, equivalent to ITACC=1.0) and all subsequent time steps will revert to BDF2 (ITACC=2) time accuracy.

In the case where the restart file was generated using the same ITACC value as currently set in the new run, but a different time step value was used, RESTARTF = -1.0 must also be specified, resulting in the first time step being computed using BDF1.

The type of time discretization at each time step (BDF1, BDF2, BDF3) is output to standard out, providing a means for checking the operation of the restart facility on the first several time steps.
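The effect of RESTARTF = -1.0 on the time-discretization order can be sketched as follows (a hypothetical helper mirroring the description above, not NSU3D source; the step-by-step build-up for ITACC=3 is an assumption):

```python
def bdf_order_per_step(itacc, restartf, nsteps):
    """Return the BDF order used on each of nsteps time steps.

    RESTARTF = -1.0 forces a first-order (BDF1) first step, after which
    the order builds up to ITACC; otherwise the full-order scheme is
    assumed available from the first step (exact restart)."""
    orders = []
    for step in range(1, nsteps + 1):
        if restartf == -1.0:
            orders.append(min(step, itacc))  # build up from BDF1
        else:
            orders.append(itacc)             # full order throughout
    return orders

print(bdf_order_per_step(itacc=2, restartf=-1.0, nsteps=4))  # [1, 2, 2, 2]
```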