
S-Parameter Technical Notes

Reference Nodes on S-Parameter Elements

When an S-parameter element is inserted in the schematic, the dialog allows you to specify a reference node (see the Creating an N-Port Model in the Schematic Editor topic). The two choices are Implied reference to ground and Common reference pin.

When implied reference to ground is selected, or when Common reference pin is chosen and then connected or shorted directly to ground in the schematic, Nexxim does not make any adjustments to the S-parameter matrix.

When a Common reference pin is unconnected or is connected to ground through a resistor, Nexxim uses matrix operations to adjust the S-parameters. When a Common reference pin is left unconnected, Nexxim connects the pin to ground through a resistor with value 10^12 ohms.

The formulas below are for the frequency domain at a single frequency.

It is convenient to convert from S-parameters to the equivalent Z-parameters. The Z-matrix maps the vector of port currents to the vector of port voltages: V = Z * I.

Suppose that the S-parameters all have a common reference impedance Zref and that the resistor inserted between the reference node of the S-parameter block and ground has value R > 0.

Let matrix A be the 2x2 matrix with the value R/Zref in each of the four locations. Then the effective (normalized) Z-matrix of the structure with the resistor is Zeff = Z + A.

Put in the form of matrix manipulations for the S-parameters:

 

Seff = [(I + S)(I - S)^-1 + A - I] * [(I + S)(I - S)^-1 + A + I]^-1

where I is the 2x2 identity matrix.
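This adjustment is easy to sketch numerically. The following Python fragment is only an illustration of the matrix manipulation under the stated assumptions (normalized Z-matrix, common reference impedance Zref); it is not Nexxim's internal code.

import numpy as np

# Illustrative sketch: adjust a 2-port S-matrix for a common reference pin tied
# to ground through a resistor R, following Zeff = Z + A with A = R/Zref in every entry.
def adjust_s_for_reference_resistor(S, R, Zref=50.0):
    I = np.eye(2)
    Z = (I + S) @ np.linalg.inv(I - S)           # normalized Z-matrix from S
    A = (R / Zref) * np.ones((2, 2))             # common-reference contribution
    Zeff = Z + A
    return (Zeff - I) @ np.linalg.inv(Zeff + I)  # back to S-parameters

# Example at one frequency; an unconnected pin would correspond to R = 1e12 ohms
S = np.array([[0.1, 0.7], [0.7, 0.1]])
print(adjust_s_for_reference_resistor(S, R=5.0))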

State-Space Method

S-parameters describe a system in terms of its frequency-dependent responses. Nexxim’s state-space model assumes a linear time-invariant (LTI) system with a vector of inputs u(t) of length P and a vector of outputs y(t) of length Q.

The state-space model represents the data as a vector of N state variables x(t) representing the responses of the system for all inputs in the specified range, and four state-space matrices, A, B, C, and D. See State Space References [1] in S-Parameter References for an introduction to state space modeling.

State Equations and Output Equations

The vector of state equations:

dx(t)/dt = A x(t) + B u(t)

where dx(t)/dt gives the rate of change of the state variables (responses), computed as a weighted sum of the current state vector and the input vector.

Matrix A, the state matrix, contains the frequency-dependent poles of the system. Matrix A has dimension N by N. Matrix B, the input matrix, contains the weights applied to the input vector. Matrix B has dimension N by P.

From the time derivatives of the state variables x(t), the simulator advances the state to x(t + Δt). The outputs are then computed from the vector of output equations:

y(t) = C x(t) + D u(t)

The output vector is a weighted sum of the current state vector and the input vector. Matrix C, the output matrix, contains the weights applied to the state vector. Matrix C has dimension Q by N. Matrix D, the feedforward matrix, contains weights applied to the input vector. Matrix D has dimensions Q by P.
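As a rough illustration of how the four matrices interact, the following Python fragment steps a small state-space model through time with a simple forward-Euler update. It is a generic sketch with placeholder matrices, not Nexxim's integration scheme.

import numpy as np

# Generic LTI state-space update (sketch): dx/dt = A x + B u, y = C x + D u
A = np.array([[-1.0, 0.0], [0.0, -5.0]])   # N x N state matrix (poles on the diagonal here)
B = np.array([[1.0], [1.0]])               # N x P input matrix
C = np.array([[1.0, -1.0]])                # Q x N output matrix
D = np.array([[0.0]])                      # Q x P feedforward matrix

dt, x = 1e-3, np.zeros((2, 1))
for k in range(1000):
    u = np.array([[1.0]])                  # unit-step input
    dxdt = A @ x + B @ u                   # state equation
    x = x + dt * dxdt                      # advance the state to x(t + dt)
    y = C @ x + D @ u                      # output equation
print(y)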

Transfer Function

The transfer function is the ratio of the output to the input. In Laplace space s, the transfer function can be expressed in terms of the state space matrices:

H(s) = C (sI - A)^-1 B + D

where sI is an N by N matrix with s on the diagonal and zeros elsewhere.

H(s) is a matrix of frequency-dependent transfer functions with dimension Q by P, one output for every input.
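For reference, the transfer function can be evaluated directly from the state-space matrices at any complex frequency. The short Python fragment below does this at s = jw, using placeholder matrices; it is a generic sketch, not Nexxim code.

import numpy as np

def transfer_function(A, B, C, D, s):
    # H(s) = C (sI - A)^-1 B + D, evaluated at a single complex frequency s
    N = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(N) - A, B) + D

A = np.array([[-1.0, 0.0], [0.0, -5.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.0]])
print(transfer_function(A, B, C, D, s=1j * 2 * np.pi * 1e9))  # response at 1 GHz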

Impulse Response and Stability

The impulse response h(t) is the response of the system to the delta function, a pulse of vanishingly narrow width. The impulse response is the inverse Laplace transform of the transfer function, and can be expressed in terms of the poles of the system p_i (the eigenvalues of matrix A).

 

h(t) = Σ_i R_i e^(p_i t),  t >= 0

where the R_i are the residues associated with the poles p_i.

Bounded-Input, Bounded-Output (BIBO) stability specifies that as long as the input is bounded, the output must be bounded. For BIBO stability, the exponential factor e^(p_i t) must decay rather than increase. Thus, all the poles p_i must have negative real parts.
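A quick numerical check of this condition looks at the poles directly: every eigenvalue of the state matrix A must lie in the left half-plane. The Python fragment below is a generic illustration, not a Nexxim utility.

import numpy as np

def is_bibo_stable(A):
    # The poles of the state-space model are the eigenvalues of A;
    # BIBO stability requires every pole to have a strictly negative real part.
    poles = np.linalg.eigvals(A)
    return bool(np.all(poles.real < 0.0)), poles

stable, poles = is_bibo_stable(np.array([[-1.0, 2.0], [0.0, -3.0]]))
print(stable, poles)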

For BIBO stability, the norm of the impulse response must always be finite:

 

∫ |h(t)| dt < ∞,  integrated from t = 0 to ∞

State Space Fitting Methods

By default, Nexxim uses an Ansoft-proprietary fitting method, the Tsuk-White Algorithm (TWA) for calculating the state space matrices. The TWA algorithm for state-space fitting is an alternative to rational fitting. TWA generates the poles of the state-space fit from the singular value decomposition of Loewner matrices derived from the input data [State-Space Reference 2]. Unlike rational fitting, TWA is not an iterative procedure. Thus, TWA can produce high-quality state-space fits in much less time than rational fitting.

The TWA algorithm may succeed where rational fitting produces a bad fit or a fit that is good but nonpassive on a problem too big for passivity enforcement.
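TWA itself is proprietary. As a rough, generic illustration of the kind of matrix involved (not the TWA algorithm), the fragment below builds a Loewner matrix from two disjoint sets of frequency samples of a simple one-pole response and lists its leading singular values; their rapid decay reflects the low underlying model order. All values are placeholders.

import numpy as np

# Generic Loewner matrix for a scalar response H(s) sampled at two disjoint
# point sets {s_i} and {mu_j}:  L[i, j] = (H(s_i) - H(mu_j)) / (s_i - mu_j)
def loewner_matrix(s, H_s, mu, H_mu):
    return (H_s[:, None] - H_mu[None, :]) / (s[:, None] - mu[None, :])

w = 2 * np.pi * np.linspace(1e6, 1e9, 40)
H = 1.0 / (1.0 + 1j * w / (2 * np.pi * 1e8))     # placeholder one-pole response
L = loewner_matrix(1j * w[0::2], H[0::2], 1j * w[1::2], H[1::2])
print(np.linalg.svd(L, compute_uv=False)[:5])    # singular values decay after the first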

The option:

s_element.twa=1|0

can be used to toggle between the TWA algorithm (1) and the rational fitting method (0).

Causality Checking and Enforcement

This topic explains how Nexxim handles causality checking and enforcement of S-parameter data.

Any physical, realizable system must be causal, that is, its response cannot precede the excitation that produces the response. Although causality is a consideration in any physical system, the focus for signal and power integrity simulations is on linear, time-invariant (LTI) systems, a category that includes (but is not limited to) interconnects, packages, connectors, and board-level systems. LTI systems are typically simulated from frequency response data, so ensuring the causality of that data is a key issue.

Nexxim can check for causality violations, and can correct causality problems. The basic strategy for both checking and correction is to compute a known-causal reconstruction of the data, and then to compare the reconstruction error to a known error bound or threshold for noncausality over all frequencies. In Nexxim the threshold for noncausality is proportional to the overall tolerance for errors in the rational fit.

An Example of a Noncausal System

The diagram below shows the internal results of the Nexxim causality checker with a transmission line. NOTE: These internally-computed values are not available for plotting in Designer.

The error bound for noncausality is the solid magenta line. The maximum value for the error bound has been set to 2x10^-3. The reconstruction error is the dashed blue line. At the lowest frequencies, the reconstruction error exceeds the threshold, indicating a causality violation. Nexxim reports the percent error as the difference between the two values, times 100.

Symptoms of Noncausality

Problems with causality of frequency response data show up as errors in the accuracy and stability of the simulation.

Causality and Accuracy

Before starting transient simulation, Nexxim calculates a causal macromodel that approximates the original data in the frequency domain. If the original data are noncausal, the approximation by the macromodel may not be accurate. The more severe the noncausality, the higher will be the fitting error of the approximation. If the fitting error is greater than 10%, transient analysis will not even begin. Even when transient runs, a fitting error greater than 1% can give results that are inaccurate, often to a degree that is unacceptable.

Causality and Stability

There is a strong correlation between causality issues and passivity violations. Often, the fitting error is acceptable, but the macromodel is nonpassive. A passive system does not generate energy; it can only dissipate or transform energy. In the frequency domain, the scattering or S-parameters of passive systems (including macromodels) have singular values that are less than or equal to 1 at all frequencies. When a macromodel is nonpassive (maximum singular value > 1.00), there is a potential for instability, with unbounded values for node voltages and branch currents. Terminations that do not match the input impedance increase the probability of instability, even with modest nonpassivity.

Unstable results can be avoided if the macromodel is passive, or can be made passive. Thus, one possible solution for instability issues is to try passivity enforcement on the macromodel before running transient again. However, passivity enforcement increases the runtime, and the degree of noncausality adds to the problem. To obtain an acceptable fit with noncausal data, the macromodel may require a large model order (number of poles > 200). Model order has a direct effect on the passivity correction runtime. A second, related issue is memory usage. Passivity correction encounters memory problems for systems with large numbers of I/O ports (>200). When the model order is also high due to noncausality, passivity enforcement is at high risk of running out of memory, which will cause transient to terminate without simulating anything. Memory problems during passivity enforcement are common with package models for memory subsystems.

In addition, passivity enforcement affects the accuracy of the result. If the calculated passive macromodel has a fitting error (to the original data) of more than 10%, transient simulation will not begin. The higher the nonpassivity (as measured by the maximum singular value), the greater chance that the macromodel will be inaccurate. Hence, strong causality problems typically lead to poorly-fitting passivity-enforced macromodels.

Many companies now require that all frequency-response data coming from commercial field solvers be certified to be causal.

Sources of Noncausality

Touchstone data come from three different sources: from measurements, from field solvers (2-D, 3-D, or 2.5-D), and from circuit solvers. Any of these sources can introduce noncausality into data. This tech note analyzes causality issues in data from field solvers and circuit solvers.

Causality Issues with Field Solvers

There are many sources of noncausality in data from field solvers.

• Constant loss-tangent models for frequency-dependent dielectric behavior have often contributed to noncausality in the data. Fortunately, the Debye and Djordjevic-Sarkar models for dielectric constants have eliminated this source of noncausality.

• Subsystems in field solvers can have noncausal models.

• Discontinuity of the field solutions can produce noncausal data.

• Noncausal interpolating functions during interpolation sweep are a frequent source of problems.

• Large inaccuracies in the discrete sweep solution at a frequency can also introduce noncausality.

Causality Issues with Circuit Solvers

Circuit solvers, too, can introduce noncausality into Touchstone data. Most of the noncausality comes from components meant to work at radio or microwave frequencies. A typical example of such a component is the transmission line model (e.g., coaxial cable). The dielectric loss of the lines is still modeled by constant loss-tangent models (conductance is made to vary linearly with frequency, while capacitance is held constant), and the conductor loss by skin-effect variations (resistance rising with the square root of frequency), while the inductance is held constant with frequency.

Testing for Causality

Causality imposes constraints on the impulse response and the frequency response of an LTI system. The constraints are the basis for causality checking the S-parameter data for an LTI system read from a Touchstone file.

Causality Constraints on the Impulse Response

Consider a system with impulse response h(t) that is excited by a signal x(t), producing an output y(t). For an LTI system, these quantities are related through the convolution integral:

y(t) = h(t) * x(t) = ∫ h(t - τ) x(τ) dτ     (1)

where the operator (*) denotes a linear convolution.

A causal system must be nonanticipatory, that is, y(t) from equation (1) should depend only on x(τ) for τ <= t. Restating this requirement, a time domain signal h(t) is causal if:

h(t) = 0  for t < 0     (2)

By equation (2), a causal h(t) is sufficient to ensure that y(t) does not precede x(t). It can be shown that a causal h(t) is both necessary and sufficient.

Causality Constraints on the Frequency Response

The frequency-domain response H(jw) is the Fourier transform of the time-domain signal h(t):

H(jw) = F{h(t)} = ∫ h(t) e^(-jwt) dt     (3)

where F{} denotes the Fourier transform, j^2 = -1, and w is the angular frequency.

The constraints on a causal frequency response can be derived as follows. A causal h(t) satisfies the condition:

h(t) = h(t) · sign(t)     (4)

where the function sign(t) returns 1 for t>0, 0 for t=0, and -1 for t<0.

Taking the Fourier transform of equation (4), we obtain:

H(jw) = (1/jπ) p.v. ∫ H(jw') / (w - w') dw'     (5)

where F{} denotes the Fourier transform, and the integral is defined according to the Cauchy principal value (p.v.). The right-hand side (RHS) of equation (5) is known as the Hilbert transform of H(jw).

The Hilbert Transform Test for Causality

The frequency response H(jw) can be separated into its real and imaginary parts:

H(jw) = Re{H(jw)} + j Im{H(jw)}

Substituting this into equation (5) and equating real and imaginary parts gives:

Re{H(jw)} =  (1/π) p.v. ∫ Im{H(jw')} / (w - w') dw'
Im{H(jw)} = -(1/π) p.v. ∫ Re{H(jw')} / (w - w') dw'     (6)

From equation (6), the real and imaginary parts of a causal H(jw) are Hilbert transforms of each other. If a frequency response satisfies this constraint, it is declared causal.

Let R0 denote the operator in the RHS of (5):

R0{H}(jw) = (1/jπ) p.v. ∫ H(jw') / (w - w') dw'     (7)

Then from equation (5) a causal frequency response maps onto itself:

H(jw) = R0{H}(jw)     (8)
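As a rough numerical illustration of this test (not the generalized procedure Nexxim uses, which is described below), the fragment reconstructs the real part of a sampled response from its imaginary part with an FFT-based discrete Hilbert transform and reports the mismatch; a causal, square-integrable response reconstructs itself closely, while a noncausal one does not. The responses and grid are placeholders.

import numpy as np
from scipy.signal import hilbert

# Sampled H(jw) on a grid spanning negative and positive frequencies; band-edge
# truncation effects are simply ignored here by comparing away from the edges.
w = 2 * np.pi * np.linspace(-50e9, 50e9, 4001)
H_causal = 1.0 / (1.0 + 1j * w * 1e-10)          # parallel-RC-like causal response
H_noncausal = H_causal * np.exp(1j * w * 1e-10)  # pure time advance -> noncausal

for name, H in [("causal", H_causal), ("noncausal", H_noncausal)]:
    re_rec = np.imag(hilbert(H.imag))            # Re{H} rebuilt from Im{H} per eq. (6)
    err = np.max(np.abs(re_rec - H.real)[1000:3000])
    print(name, "max reconstruction error:", err)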

 

The Generalized Hilbert Transform Test for Causality

The conditions in (6) are sufficient for causality, but are not necessary. In particular, the constraints in (6) are true only for a response H(jw) that is square-integrable. The function H(jw) is square-integrable if:

∫ |H(jw)|^2 dw < ∞     (9)

The constraints in (6) have been shown to be necessary and sufficient conditions for causality with square-integrable functions (see Causality Reference [3]). An example of a square-integrable function is the driving-point impedance of a parallel RC circuit:

Z(jw) = 1 / (G + jwC)

where G=1/R and C are positive real constants.

Here are example functions that are not square-integrable (R, L, and C are positive real constants). These functions are causal but do not satisfy (6):

1. Z(jw) = R

2. Z(jw) = jwL

3. Z(jw) = 1 / (jwC)

4. Any linear combination of 1, 2, and 3.

For functions that are not square-integrable, the necessary and sufficient conditions for causality are given by the generalized Hilbert transform (see Causality Reference [3]). The generalized Hilbert transform of H(jw), Hn(jw), is given by:

 

(10)

 

 

where n is the number of subtraction points, and

 

w_k, k = 1, ..., n

represents the subtraction points spread over the available bandwidth W (see Testing for Causality of Touchstone Data for details on W), and LH(jw) is the Lagrangian interpolation polynomial for H(jw) (see Causality Reference [6]):

 

LH(jw) = Σ_k H(jw_k) Π_(m≠k) (w - w_m) / (w_k - w_m)     (11)

 

 

The Hilbert transform is a special case of the generalized Hilbert transform when n=0, hence the term “Generalized.” From equations (10) and (11), if n = 0 then LH(jw) = 0, and the additional terms in (10) disappear. The resulting equation is the Hilbert transform of H(jw), equation (8).

Let Rn denote the reconstruction operator in the RHS of (10):

 

(12)

Then a causal frequency response, square-integrable or not, maps onto itself under Rn:

H(jw) = Rn{H}(jw)     (13)

Equation (13) can be used to test whether or not a frequency response is causal. A causal frequency response will map onto itself under the operator Rn, but a noncausal response will not.

A further constraint on a causal frequency response is that all its network transforms must also be causal. That is, if H(jw) is a scattering parameter and is causal, then the corresponding impedance and admittance parameters must also be causal.

Truncation and Discretization Errors in Causality Testing

Causality of a frequency response can be tested by verifying whether or not the frequency response satisfies (13): a causal response will map onto itself if the operator Rn is applied. On the other hand, if the frequency response is noncausal, then it will not map onto itself and hence will not satisfy (13). However, the test in (13) is valid only when H(jw) is known continuously at all frequencies. Frequency responses in the form of Touchstone data are known only at discrete frequencies in the bandwidth W, where wmax is the maximum available angular frequency in the data. Therefore, equation (13) is not sufficient as a test of causality in Touchstone data.

One possibility for checking the causality of Touchstone data is to use the test in (13), but to limit the integral in (12) to W, ignoring the contribution of the out-of-band integral, and to interpolate H(jw) within W so that (12) can be computed. However, the results of this new test are not reliable, due to two sources of error. The error introduced by ignoring the out-of-band contribution is called the truncation error. The error introduced when approximating a continuous response from data at discrete frequencies is called the discretization error. The unreliability that results from these two errors can lead one to falsely declare a causal response as noncausal (a false positive) or a noncausal response as causal (a false negative). Such false detections should be avoided. Therefore, for tabulated data, a modified test based on (13) that is also reliable is needed. Since truncation and discretization errors are the sources of unreliability, the modified test has to account for both.

Accounting for the Truncation Error

To compensate for the missing out-of-band data, the following formulas can be applied. Separating (12) into in-band and out-of-band contributions, we obtain:

 

 

(14)

In (14), the integration term over WC (the complement of the bandwidth W) is the out-of-band contribution to H(jw). The Cauchy principal value integral in this term can be replaced by a regular integral, since the integrand does not have a singularity in WC.

We can separate the modified integral to reflect the separate contributions from H(jw) and LH(jw), respectively:

 

(15)

The integration term for LH(jw) can be abbreviated as Cn(jw). Since LH is an nth-order polynomial, a closed-form expression for Cn(jw) can be derived.

The integration term for H(jw) can be abbreviated as -En(jw). This is the truncation error. Since H(jw) is not defined outside W, En(jw) cannot be evaluated directly. However, the error introduced by omitting this term can be quantified by fixing the behavior of H(jw) in WC. If:

 

|H(jw)| <= M / w^a  for w in WC     (16)

where M is a real constant and a = 0, 1, 2, 3, ..., then it can be shown that:

|En(jw)| <= Tn(w)     (17)

where Tn is the truncation error bound:

 

(18)

The bound in (18) has been shown to be tight (see Causality Reference [5]). This bound can be made small by increasing the number of subtraction points, n.

Due to its Lagrange-polynomial nature, Tn(w) is bounded only for frequencies in a restricted band W', where:

(19)

and where e is a small number (<0.1). It can be seen from (18) that Tn(w) decreases as n increases. From (17), when Tn(w) is reduced, so is |En(jw)|, the truncation error. Thus, the disadvantage of not knowing the frequency response outside of the bandwidth (i.e., in WC) can be offset by increasing the number of subtraction points, n.

The best approximation to Hn(jw) is obtained by ignoring the En term in (15) and by increasing n such that Tn(w) is kept smaller than the user-specified causality tolerance, dnc. The approximation is given by:

(20)

Accounting for the Discretization Error

Next, we consider how to evaluate the in-band contribution, the second term in (20). Since the functional form of H(jw) is not known in W, this integration cannot be evaluated analytically. Since the integrand is singular for w' = w, numerical integration will not be robust unless the integrand is regularized. Once regularized, simple integration methods such as the trapezoidal rule and Simpson's method can be employed. If the numerical result is taken as the approximation to the in-band integral, then the error in this approximation, Dn(jw), is:

(21)

The bound for the discretization error is obtained by subtracting the estimates computed using two different integration rules, and is given by:

 

(22)

 

where n1 and n2 denote two different integration rules. In Nexxim, the two rules are Simpson's method and the trapezoidal rule, respectively. The discretization error is reduced if the bound is reduced, and the bound is reduced by decreasing the frequency step in the tabulated frequencies.

The Causality Test with Minimized Errors

Now we can construct a causality test that minimizes both truncation and discretization errors. Define the numerical reconstruction error and the ideal reconstruction error:

 

(23)

 

(24)

 

Then the numerical reconstruction error can be shown to be:

 

(25)

Therefore, the reconstruction error satisfies the following inequality for any H(jw):

 

(26)

 

 

If and only if H(jw) is causal, the ideal reconstruction error vanishes at all frequencies.

Let the total error bound be the sum of the discretization and truncation error bounds:

 

(27)

 

Thus, for a causal signal, the reconstruction error never exceeds the total error bound:

 

(28)

 

When this condition is met, any causality violations in H(jw) are too small to be flagged, and the data are declared causal.

 

If, on the other hand, the reconstruction error exceeds the threshold at any frequency:

 

(29)

 

then H(jw) is declared to be noncausal. Nexxim reports the causality violation as a percent error: the difference between the reconstruction error and the error bound, times 100.

Fixing the Values for The Total Error Bound

Since Tn varies with n, the total error bound must also vary with n. Also, since the discretization error bound varies with the frequency step, the total error bound must vary with the frequency step as well. It is important to set the total error bound to reasonable values. Very large values can hide causality violations, while very small values can flag any response as noncausal.

The following strategy assigns reasonable values to the total error bound. Typically, the maximum tolerable magnitude of noncausality, dnc, is known a priori. In previous versions, Nexxim set dnc to 10^-4 (0.01%), and the value could not be changed by the user. Experimentation has shown, however, that a tolerance of 0.01% is too conservative: macromodels with acceptable fitting errors are flagged as noncausal. Since the success of the macromodel fitting process is the ultimate goal, the causality tolerance is tied by default to the tolerance for the fitting error, dfit. These tolerances are linearly related:

 

dnc = b * dfit     (30)

 

where b is a proportionality constant. Now, a reasonable value for b must be found. CAD tools commonly set dfit to 0.01 (1%) for S-parameters. To keep dnc greater than 0.0001, b must be greater than 0.01. A reasonable value for b therefore lies between 0.01 and 1. In Nexxim, b is chosen to be 0.2.

If the value of dnc is set by the user (with the option causality_checker_tolerance), that value is used and the relationship in equation (30) is not considered.

Once the tolerance for noncausality has been determined, a reasonable n is chosen such that the truncation error is always less than the tolerance by a fixed proportion:

 

(31)

 

In practice, equation (31) can be ensured only for frequencies less than or equal to the last subtraction point, given by:

 

(32)

 

where e is a small positive number. Based on experimentation, Nexxim sets e to 0.05.

Usually, the discretization error bound is much smaller than Tn(w), since the frequency steps in the tabulated data are small. When the discretization error bound is small, the total error bound follows Tn(w) very closely. As a result, when Tn satisfies (31), the total error bound quite often also satisfies (31), that is:

 

(33)

 

In this situation, the test described here has sufficient resolution to detect causality violations with magnitude greater than dnc. If the frequency steps in the tabulated data are not small, the discretization error bound can be comparable in magnitude to Tn(w). In this case, the total error bound can be greater than dnc. When this condition is detected, the best solution is to reduce the maximum frequency step in the tabulated data and repeat the causality test until the total error bound satisfies (33).

Handling False Positives due to the Discrete Error Bound

Once reasonable values have been found for the total error bound at all frequencies, the causality test can be run. However, a reliability issue now emerges. When calculated by equation (22), the discretization error bound is not strictly an upper bound. As a result, the condition in (29) can be met, flagging the response as noncausal, although the frequency response is really causal. This problem of false positives can be overcome by computing the discretization error bound as a strict upper bound, making use of upper bounds for the integration error in Simpson's method and the trapezoidal rule. However, the resulting bounds have been found to be unreliable at times.

Even though the bound computed from (22) may contribute to false positives, the result is more reliable than the one obtained with strict upper bounds. For this reason, Nexxim calculates the discretization error bound using equation (22). Nexxim uses a heuristic to differentiate between a false positive due to this bound and a genuine noncausality. A false positive usually results in a minor causality violation, while a genuine positive results in a strong causality violation. Nexxim considers any violation less than 0.5% to be a minor violation. A causality violation is computed at wk when the condition in (29) is true at wk. The violation is computed as:

 

(34)

 

If the violation is less than 0.5%, then a false positive is most likely, and the frequency response is certified to be causal. If, on the other hand, the violation is 0.5% or greater, the frequency response is declared to be noncausal. With any kind of violation, Nexxim prints the percent violation to the console for each entry in the S-matrix.
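The decision logic of equations (28), (29), and (34) can be paraphrased in a few lines of Python. The array names, threshold handling, and percent-error formula below follow the description above, but this is an illustrative sketch, not Nexxim's source.

import numpy as np

def classify_causality(recon_err, total_bound, minor_threshold_pct=0.5):
    # Per-frequency violation in percent, nonzero only where the error exceeds the bound
    violation_pct = 100.0 * np.maximum(recon_err - total_bound, 0.0)
    worst = violation_pct.max()
    if worst == 0.0:
        return "causal", worst                      # condition (28) holds at every frequency
    if worst < minor_threshold_pct:
        return "causal (minor violation, likely false positive)", worst
    return "noncausal", worst                       # condition (29) with a strong violation

err = np.array([1e-4, 3e-4, 2.5e-3])
bound = np.array([2e-3, 2e-3, 2e-3])
print(classify_causality(err, bound))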

Enforcing Causality of Touchstone Data

Nexxim can also perform causality enforcement on frequency response data that have been flagged as noncausal. The enforcement method employs the fact that the real and imaginary parts of a causal frequency response are Hilbert transforms of each other [see equation (6)]:

 

(35)

 

The real part of the response is calculated as the Hilbert transform of the imaginary part:

Re{H(jw)} = (1/π) p.v. ∫ Im{H(jw')} / (w - w') dw'     (36)

 

The reconstruction obtained by combining the real and imaginary parts is guaranteed causal. However, accuracy may suffer.
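A minimal numerical sketch of this enforcement step, under the same FFT-based Hilbert-transform assumptions as the checking example earlier (and again not Nexxim's implementation), replaces the real part of the sampled data with the Hilbert transform of the imaginary part and keeps the imaginary part unchanged.

import numpy as np
from scipy.signal import hilbert

def enforce_causality(H_samples):
    # H_samples: complex frequency response on a dense grid covering negative
    # and positive frequencies. The returned data are causal by construction,
    # but accuracy relative to the original data may suffer (see text above).
    re_causal = np.imag(hilbert(H_samples.imag))   # Re rebuilt from Im, per eq. (36)
    return re_causal + 1j * H_samples.imag

w = 2 * np.pi * np.linspace(-50e9, 50e9, 4001)
H_nc = np.exp(1j * w * 1e-10) / (1.0 + 1j * w * 1e-10)   # a noncausal placeholder response
H_fixed = enforce_causality(H_nc)
print(np.max(np.abs(H_fixed - H_nc)))                    # size of the applied correction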

Understanding the Causality Options

Three options control Nexxim causality checking and enforcement. Here are technical details for the causality options.

causality_check

Setting .option s_element.causality_check=1 activates a causality test to check for noncausality in the (sampled) frequency data. Causality checking uses a test based on the generalized Hilbert transform; see equation (28). A signal is declared causal if it meets the following constraint:

 

(37)

 

By default, s_element.causality_check=0, and no causality checking is performed.

enforce_causality

Setting .option s_element.enforce_causality=1 runs the causality check and attempts to compensate for any noncausality. Compensation replaces the original frequency data with the causal reconstruction estimate. Any cached S-parameter solution is updated with new corresponding causal estimates. Since causality enforcement affects accuracy, fixing the original non-causal data is always a better solution.

causality_checker_tolerance

The .option s_element.causality_checker_tolerance=val sets dnc, the maximum magnitude of error that still satisfies the causality checker. Nexxim sets the default tolerance to be a fraction of the overall tolerance for fitting errors.

Example of Truncation Error Formulas

Here are some graphic examples of the truncation error formulas. NOTE: These internally-computed values are not available for plotting in Designer.

 

Figure 1. Truncation error Tn(w) decreases with the number of subtraction points, n.

In Figure 1, the truncation error bound Tn(w) from (18) is compared over different values of n for a scattering parameter that is known up to 10 GHz. For an S-parameter, M=1 and a=0 in (16) and (18). In Figure 1, Tn(w) has a local minimum at each subtraction frequency w_k. Since Tn(w) is shown only for non-negative frequencies, only half the subtraction points are plotted.

 

Figure 2. The number of subtraction points is increased to ensure that Tn(w) stays below the causality tolerance dnc.

In Figure 2, Tn(w) is shown for various values of dnc. As dnc is reduced from 0.01 to 0.0001, Nexxim increases the number of subtraction points n from 10 to 22, so that Tn(w) remains below dnc over the restricted band W'.

 

 

Examples of Causality Checking and Enforcement

This section shows the results of causality checking on a simple resistor, a lossy transmission line, and a known causal W-element from Designer.

Causality Check of a Resistor

The first example is a resistor with resistance R (Figures 3a, 3b). Such a resistor is known to be causal: its impulse response is an impulse at t=0, hence causal. However, confirming this fact in the frequency domain with just the Hilbert transform is problematic due to the non-square-integrable nature of the resistance. Hence, the causality of a resistor is confirmed with the generalized Hilbert transform. Here, the resistance R is fixed at 75 ohms and is converted to the corresponding one-port S-parameter. With a 50-ohm reference impedance, the S-parameter S(jw) is computed to be 0.2 at all frequencies. Tabulated values of this S-parameter are created for frequencies in the range [0, 10 GHz] in steps of 10 MHz. The parameter e is chosen as approximately 0.05, which fixes the restricted band W' through (19). The causality checker tolerance dnc is chosen as 0.002. The quantity n was found to be 14 for Tn(w) to satisfy (31). Because the S-parameter is constant with frequency, the discretization error bound is negligible; therefore, the total error bound satisfies (33).
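The setup of this example is easy to reproduce; the short fragment below (an illustration, not the tool's input deck) tabulates the one-port S-parameter of a 75-ohm resistor against a 50-ohm reference and confirms the constant value of 0.2.

import numpy as np

R, Zref = 75.0, 50.0
f = np.arange(0.0, 10e9 + 10e6, 10e6)           # 0 to 10 GHz in 10 MHz steps
S11 = np.full_like(f, (R - Zref) / (R + Zref))  # one-port S of a resistor: 0.2 at every frequency
print(len(f), S11[0])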

In Figure 3, causality reports for the resistor are shown. In Figure 3a, the computed reconstruction error is compared against the total error bound. From the figure, it is clear that the reconstruction error is smaller than the bound for all frequencies in W', except near the subtraction frequencies (the minima in the plot). To get a clear picture even at the subtraction frequencies, the ratio of the two quantities is computed. In Figure 3b, this ratio is shown as a function of frequency. If the ratio is more than 0 dB at any frequency, then the frequency response has a possible noncausality at that frequency. From Figure 3b, it can be concluded that the frequency response satisfies (28) and, hence, is causal.

 

Figure 3. Causality reports on the frequency response of a resistor, dnc = 0.002, n=14.

Causality Check of a Lossy Transmission Line

The S-parameters of a lossy transmission line are computed from the closed-form expressions of the frequency-dependent per-unit-length (p.u.l.) parameters R (resistance), L (inductance), G (conductance), and C (capacitance). Refer to Figures 4, 5, 6, and 7.

Let Rdc and Gdc be the p.u.l. resistance and conductance, respectively, at DC. Further, let Rs and Gd be real-valued constants such that R(f) = Rdc + Rs·sqrt(f) and G(f) = Gdc + Gd·f. Let the inductance L(f) = Lext, where Lext is the p.u.l. external inductance at high frequency, and let C(f) = C0, where C0 is the constant p.u.l. capacitance.

S-parameters computed from such p.u.l. parameters are expected to be noncausal, since the corresponding p.u.l. impedance (R + jwL) and admittance (G + jwC) are not causal frequency responses. However, when Rs = 0 and Gd = 0, then the S-parameters are expected to be causal.
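For illustration, the fragment below builds this frequency-dependent RLGC model and converts a line of length d into 2-port S-parameters with the standard telegrapher's-equation formulas. All numeric values are placeholders, and the code is a sketch of the model described above, not Designer's solver.

import numpy as np

def line_s_params(f, d, Rdc, Rs, Gdc, Gd, Lext, C0, Z0=50.0):
    w = 2 * np.pi * f
    R = Rdc + Rs * np.sqrt(f)          # p.u.l. resistance (skin-effect-style term)
    G = Gdc + Gd * f                   # p.u.l. conductance (constant-loss-tangent-style term)
    Z, Y = R + 1j * w * Lext, G + 1j * w * C0
    gamma, Zc = np.sqrt(Z * Y), np.sqrt(Z / Y)
    den = 2 * Zc * Z0 * np.cosh(gamma * d) + (Zc**2 + Z0**2) * np.sinh(gamma * d)
    S11 = (Zc**2 - Z0**2) * np.sinh(gamma * d) / den
    S21 = 2 * Zc * Z0 / den
    return S11, S21

f = np.linspace(1e6, 2e9, 2000)        # start above DC to keep sqrt(Z/Y) well defined here
S11, S21 = line_s_params(f, d=0.2, Rdc=5.0, Rs=1e-3, Gdc=1e-4, Gd=1e-13, Lext=350e-9, C0=140e-12)
print(abs(S21[0]), abs(S21[-1]))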

Cases 1 through 4 (Figures 4, 5, 6, and 7, respectively) illustrate how causality results depend on the values of Rs and Gd.

1. Rs = 0 and Gd = 0. Refer to Figure 4. The transmission line still has loss due to the nonzero Rdc and Gdc. S-parameters were computed from tabulated frequencies in the range [0, 2GHz]. The causality checker tolerance dnc is 0.002 and n is 14, as in the resistor example. Figures 4a and 4b indicate that the transmission line is causal under these conditions.

2. Rs = 0 and Gd <> 0. Refer to Figure 5. The simulation parameters are the same as in Case 1, except that Gd is made nonzero. Figures 5a and 5b show the resulting causality violation at low frequencies.

3. Rs <> 0 and Gd = 0. Refer to Figure 6. The parameters are the same as in Case 1, except that Rs is made nonzero. Figures 6a and 6b show the resulting causality violation over half the frequencies.

4. Rs <> 0 and Gd <> 0. Refer to Figure 7. The parameters are the same as in Case 1, except that both Gd and Rs are made nonzero. Figures 7a and 7b show the resulting causality violation, similar to the result in Case 3.

Causality Check on a Causal Model W-Element Transmission Line

We can compare the causality check results of the lossy transmission line S-parameters with those obtained using Nexxim’s causal W-element RLGC model. In this example, the RLGC values are comparable to the ones used for the lossy transmission line. The causality reports are shown in Figures 8a and 8b. As can be seen from the figure, the new S-parameters are causal.

 

Figure 4. Causality reports on the frequency response of a transmission line with Rs =0 and Gd = 0, dnc = 0.002, n=14.

 

Figure 5. Causality reports on the frequency response of a transmission line with Rs =0 and Gd <> 0, dnc = 0.002, n=14.

 

Figure 6. Causality reports on the frequency response of a transmission line with Rs <> 0 and Gd = 0, dnc = 0.002, n=14.

 

Figure 7. Causality reports on the frequency response of a transmission line with Rs <> 0 and Gd <> 0, dnc = 0.002, n=14.

 

Figure 8. Causality reports on S12 of Nexxim’s causal W-element transmission line with Rs <> 0 and Gd <> 0, dnc = 0.002, n=14.

Passivity Checking and Enforcement

A passive system does not contain any energy sources.

Nexxim offers three methods for passivity checking and enforcement: an iterated fitting method, a convex programming method, and a perturbation method. See Passivity References in the S-Parameter References for references mentioned in this topic.

Iterated Fitting Method

Iterated Fitting of Passivity Violations (IFPV) is the latest passivity enforcement algorithm provided by Nexxim (s_element.passivity_enforcement=7). IFPV requires relatively little memory to run, and runs significantly faster than either convex optimization or passivity by perturbation. However, IFPV does not guarantee goodness-of-fit as does the convex optimization method. The IFPV algorithm works by taking the results of the original state-space fit, selecting those areas that have passivity violations, and fitting just the excess values, with the original poles. The fitted residues are then subtracted from the original state-space fit. This step is iterated, balancing between goodness of fit and passivity, until a passive model is achieved.

Convex Programming Method

The convex programming method (s_element.passivity_enforcement=1) enforces passivity by constraining the transfer function to be positive real (Passivity Reference [1]).

By the Bounded Real Lemma, the state space system ABCD is passive if a positive definite matrix K > 0 exists that satisfies the associated linear matrix inequality in A, B, C, D, and K.

The convex programming method preserves the poles from the original state space formulation (state matrix A). The formulation is such that the estimate of matrix B is not needed. For a fixed matrix A, the method recomputes matrices C and D to minimize errors while observing the Bounded Real Lemma.

The convex method of passivity enforcement is guaranteed to produce a passive state space realization, but is very slow and memory-intensive. Convex optimization requires computing the Hessian (second derivative) matrix, with dimension (P+Q)^2 by (P+Q)^2, where P+Q is the total number of input and output ports. The total memory requirement for the matrices therefore grows as (P+Q)^4, and the compute time to solve them grows as (P+Q)^6. In practice, this rapid growth of the solution size limits the convex programming method to systems with a total number of ports (P+Q) less than or equal to 10.

Perturbation Method

The passivity by perturbation method is selected with (s_element.passivity_enforcement=6). With this method, passivity enforcement of scattering parameter data with larger numbers of ports is achieved by the perturbation of the singular values of the transfer function matrix.

The frequency-domain transfer function matrix H(jw):

H(jw) = C (jwI - A)^-1 B + D

is obtained from the transfer matrix fit to the S-parameter data S(jw).

The singular values of the transfer matrix, σ[H(jw)], are the square roots of the eigenvalues of H^H(jw)H(jw), where H^H(jw) denotes the complex conjugate transpose of H(jw).

For passivity, the singular values of the transfer matrix must be less than or equal to 1 for all frequencies:

σ[H(jw)] <= 1  for all w
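Checking this condition on tabulated data is straightforward; the fragment below (a generic sketch, not Nexxim's passivity checker) computes the largest singular value of an N-port S-matrix at every tabulated frequency and reports whether it ever exceeds 1.

import numpy as np

def max_singular_values(S_tab):
    # S_tab: array of shape (num_freqs, N, N) holding the S-matrix at each frequency
    return np.array([np.linalg.svd(S, compute_uv=False)[0] for S in S_tab])

# Two-port example: one passive sample and one sample with a singular value above 1
S_tab = np.array([[[0.1, 0.7], [0.7, 0.1]],
                  [[0.2, 0.99], [0.99, 0.2]]], dtype=complex)
smax = max_singular_values(S_tab)
print(smax, "passive at all frequencies:", bool(np.all(smax <= 1.0)))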

 

 

The perturbation method is derived from the eigenvalue (modal) perturbation of [HH(jw)H(jw)], equivalent to the perturbation of the singular values of H(jw). Higher order terms are ignored. As a first step, the D matrix is passified to guarantee asymptotic passivity. Next, linear perturbation is applied iteratively to the C matrix, minimizing the norm of the perturbation and the transfer matrix deviation at each iteration.

Generally, a nonpassive singular value exceeds the threshold only over a range of frequencies. The method uses the slope of the curve at the points where the singular value crosses the threshold to determine the direction of the perturbation to be applied.

The perturbation method generates matrices of approximate size k * N * (P+Q), where N is the number of states and k is the number of singular values that are greater than 1. Thus, this method is applicable to systems with large numbers of ports.

Convolution Method

By default, Nexxim transient analysis of S-parameter elements uses a state space formulation.

Setting option s_element.convolution to 1, 2, or 3 sets Nexxim to use convolution rather than state-space matrices to model the behavior of S-parameter elements during transient analysis.

The convolution method converts the frequency-domain parameters into time-domain impulse responses. The impulse responses are directly applied in transient simulation via the convolution integral.
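As a rough sketch of this idea (not the algorithm selected by the convolution options below), the fragment computes an impulse response from band-limited S-parameter samples with an inverse FFT and applies it to an input waveform through a discrete convolution. The response and grid are placeholders.

import numpy as np

# Band-limited samples of one S-parameter on a uniform grid from DC to fmax
fmax, nf = 20e9, 401
f = np.linspace(0.0, fmax, nf)
H = 1.0 / (1.0 + 1j * f / 3e9)            # placeholder one-pole response

# Discrete impulse response via inverse real FFT (assumes conjugate symmetry for f < 0)
h = np.fft.irfft(H)                       # 2*(nf-1) real time samples
dt = 1.0 / (2.0 * fmax)                   # time step implied by the frequency grid

# Transient output by direct discrete convolution with a unit-step input
x = np.ones(len(h))
y = np.convolve(h, x)[: len(h)]           # y[n] = sum_k h[k] * x[n-k]
print(dt, y[-1])                          # y settles toward the DC value H(0) = 1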

Upon completion, Nexxim reports the error in the least-squares convolution fit. Errors greater than 0.1 suggest non-causality in the S-parameter data. Convolution may also yield inaccurate results without any warning. Several modifications to the data may improve the quality of the convolution response:

• Provide linearly spaced frequency points in the S-parameter file.

• Include DC in the S-parameter file.

• Provide as large a frequency range as possible.

• Avoid sharper-than-necessary input waveforms in the netlist.

The different convolution options control the generation of the impulse response. Convolution=1 is recommended. Convolution=2 and 3 are for advanced users.

• convolution=1: The impulse response is a piecewise linear waveform, with breakpoints chosen to exactly match the given frequency-domain data. This setting gives the most accurate response, but may have passivity violations just above the frequency band of interest.

When convolution=1 has been enabled, setting the S-element parameter DELAYHANDLE=1 enables convolution to incorporate propagation delay for that element.

Note 

The HSPICE S-Model parameter DELAYHANDLE=1 has the same effect as option convolution=1.

• convolution=2: The impulse response is a train of impulses in the time domain, given by the inverse Fast Fourier Transform. This setting yields an impulse response that is accurate in-band and usually passive. However, the transient waveforms may have discontinuity effects.

• convolution=3: The impulse response is the linear interpolation of the inverse FFT results. This setting yields an impulse response with a low probability of passivity violations, but there is significant filtering towards the top of the frequency range of the input data.

The convolution=1, 2, 3 options apply the convolution method globally to all S-parameter models in the design. The convolution option can be overridden in a particular model by setting model parameter CONVOLUTION. The setting of a CONVOLUTION model parameter overrides the global option convolution setting for S-element instances that reference the given model.



