by Gaffke, N., Graßhoff, U., Schwabe, R.

**Series:** 2012-08, Preprints

**MSC:** 62K05 Optimal designs

**Abstract:**

The basic structure of classical algorithms for the numerical computation of optimal approximate linear regression designs is briefly summarized. These algorithms are applicable if the convex and compact set of information matrices is simple enough that linear functions can easily be minimized over it. This limitation can be weakened to "fractional minimization" of linear functions. In the case of multiple first-order regression on the cube $[-1, 1]^k$ with heteroscedasticity caused by random coefficients with unknown means but known dispersion matrix, such fractional linear minimization turns out to be nonconvex quadratic minimization over the cube, which is known to be NP-hard. Applications are therefore restricted to small dimensions $k \leq 10$. In the special case that the dispersion matrix of the random coefficients is diagonal, equivariance of the linear regression model allows restriction to invariant designs, and the algorithms become applicable for any dimension $k$.
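To illustrate the NP-hard subproblem mentioned above (and not the authors' actual algorithm), the following is a minimal sketch of heuristically minimizing an indefinite quadratic $x^\top Q x + c^\top x$ over the cube $[-1,1]^k$ by multistart projected gradient descent; the matrix $Q$, vector $c$, step size, and iteration counts are all hypothetical choices for the example, and a heuristic of this kind gives no global-optimality guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5

# Hypothetical indefinite symmetric matrix and linear term:
# an indefinite Q makes the quadratic objective nonconvex.
A = rng.standard_normal((k, k))
Q = (A + A.T) / 2
c = rng.standard_normal(k)

def f(x):
    """Quadratic objective x^T Q x + c^T x."""
    return x @ Q @ x + c @ x

def projected_gradient(x0, step=0.05, iters=500):
    """Gradient steps followed by projection (clipping) onto [-1, 1]^k."""
    x = x0.copy()
    for _ in range(iters):
        x = np.clip(x - step * (2 * Q @ x + c), -1.0, 1.0)
    return x

# Multistart: run local search from the origin and from random points,
# keep the best local minimizer found.
starts = [np.zeros(k)] + [rng.uniform(-1, 1, k) for _ in range(30)]
best_x = min((projected_gradient(x0) for x0 in starts), key=f)
```

Because each restart only finds a local minimizer, the number of restarts needed to locate the global minimum can grow with $k$, which is one concrete way the NP-hardness of the subproblem shows up in practice.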

**Keywords:**

Information matrix; D- and I-optimality; quasi-Newton method; nonconvex quadratic minimization; invariant design

**This paper was published in:**

Submitted to CSDA, Special Issue on Algorithms for Design of Experiments, May 29, 2012