Monday, June 16, 2014

History of finite element analysis

Finite Element Analysis (FEA) was first developed in 1943 by R. Courant, who utilized the Ritz method of numerical analysis and minimization of variational calculus to obtain approximate solutions to vibration systems.

By the early 1970s, FEA was limited to expensive mainframe computers, generally owned by the aeronautics, automotive, defense, and nuclear industries. With the rapid decline in the cost of computers and the phenomenal increase in computing power, FEA has since been developed to remarkable precision. Present-day computers are able to produce accurate results for a wide range of problems.

The finite element method obtained its real impetus in the 1960s and 1970s through the developments of J. H. Argyris with co-workers at the University of Stuttgart, R. W. Clough with co-workers at UC Berkeley, O. C. Zienkiewicz with co-workers at the University of Swansea, Philippe G. Ciarlet at the University of Paris 6, and Richard Gallagher with co-workers at Cornell University. Further impetus was provided in these years by available open source finite element software programs. NASA sponsored the original version of NASTRAN, and UC Berkeley made the finite element program SAP IV widely available. A rigorous mathematical basis for the finite element method was provided in 1973 with the publication by Strang and Fix. The method has since been generalized for the numerical modeling of physical systems in a wide variety of engineering disciplines, e.g., electromagnetism, heat transfer, and fluid dynamics.

Wednesday, June 11, 2014

Quadratic forms

Given a real vector space V, we will say a function φ: V -> R is a quadratic form if:


φ(x) = μ, with μ belonging to R

φ(ax) = a^2 φ(x)
φ(x+y) + φ(x-y) = 2φ(x) + 2φ(y)
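The two identities above can be checked numerically. This is a minimal sketch using a made-up quadratic form φ(x) = x1² + 2·x2² on R² (the function and the test vectors are illustrative, not from the notes):

```python
# Hypothetical quadratic form phi(x) = x1^2 + 2*x2^2 on R^2.
def phi(x):
    return x[0] ** 2 + 2 * x[1] ** 2

x = (3.0, 1.0)
y = (-1.0, 4.0)
a = 5.0

# phi(a*x) = a^2 * phi(x)
assert phi((a * x[0], a * x[1])) == a ** 2 * phi(x)

# Parallelogram law: phi(x+y) + phi(x-y) = 2*phi(x) + 2*phi(y)
xpy = (x[0] + y[0], x[1] + y[1])
xmy = (x[0] - y[0], x[1] - y[1])
assert phi(xpy) + phi(xmy) == 2 * phi(x) + 2 * phi(y)
```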


The matrix form comes from writing φ(x) = a11 (x1)^2 + a22 (x2)^2 + a33 (x3)^2 + a12 x1x2 + a13 x1x3 + a23 x2x3 as x^T A x. The vector x is placed as a row, then the symmetric matrix A is formed by placing a11, a22 and a33 on the diagonal, dividing the cross coefficients a12, a13 and a23 by two and putting them in their symmetric off-diagonal positions; finally, the vector x is placed again as a column.
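The construction above can be sketched in code: halve the cross coefficients, place them symmetrically, and verify that x^T A x reproduces the polynomial. The coefficient values and the test vector here are made up for illustration:

```python
# Illustrative coefficients for phi(x) = a11*x1^2 + a22*x2^2 + a33*x3^2
#                                      + a12*x1*x2 + a13*x1*x3 + a23*x2*x3
a11, a22, a33 = 1.0, 2.0, 3.0
a12, a13, a23 = 4.0, 6.0, 8.0

# Diagonal keeps a11, a22, a33; each cross coefficient is halved and
# placed in the two symmetric off-diagonal positions.
A = [[a11,     a12 / 2, a13 / 2],
     [a12 / 2, a22,     a23 / 2],
     [a13 / 2, a23 / 2, a33]]

def phi_matrix(x):
    # x^T A x, written with plain loops
    return sum(x[i] * A[i][j] * x[j] for i in range(3) for j in range(3))

def phi_poly(x):
    x1, x2, x3 = x
    return (a11 * x1**2 + a22 * x2**2 + a33 * x3**2
            + a12 * x1 * x2 + a13 * x1 * x3 + a23 * x2 * x3)

x = (1.0, -2.0, 0.5)
assert phi_matrix(x) == phi_poly(x)
```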

The classification of a quadratic form is:
  • Positive definite: φ(x) > 0 for every x ≠ 0
  • Positive semidefinite: φ(x) ≥ 0 for every x
  • Negative semidefinite: φ(x) ≤ 0 for every x
  • Negative definite: φ(x) < 0 for every x ≠ 0
  • Indefinite: φ(x) takes both positive and negative values
Basically, when you diagonalize the matrix of a quadratic form, the diagonal shows you its eigenvalues. If all of them are positive, the form is positive definite. If they are non-negative and at least one of them is zero, it is positive semidefinite. The same reasoning applies to the negative definite and negative semidefinite cases, but with negative values. If none of these conditions holds, the form is indefinite. Quadratic forms are the best way to check the positivity property of a scalar product, which needs to be positive definite.
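The eigenvalue test described above can be sketched with NumPy, using `eigvalsh` for the symmetric matrix (the tolerance and the example matrices are illustrative choices):

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify a symmetric matrix A by the signs of its eigenvalues."""
    ev = np.linalg.eigvalsh(A)           # real eigenvalues, ascending order
    pos = bool((ev > tol).any())
    neg = bool((ev < -tol).any())
    zero = bool((np.abs(ev) <= tol).any())
    if pos and not neg:
        return "positive semidefinite" if zero else "positive definite"
    if neg and not pos:
        return "negative semidefinite" if zero else "negative definite"
    return "indefinite"

assert classify(np.diag([1.0, 2.0, 3.0])) == "positive definite"
assert classify(np.diag([1.0, 0.0, 3.0])) == "positive semidefinite"
assert classify(np.diag([1.0, -2.0, 3.0])) == "indefinite"
```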

Saturday, June 7, 2014

Bilinear forms

Given a vector space V, we will say a function f: V×V -> R is bilinear if

f(x,y) = μ, with μ belonging to R and x, y being two vectors.

In order to be a bilinear form, it has to be linear in both positions.

Linear on the left: f(ax+by, z) = af(x,z) + bf(y,z)
Linear on the right: f(x, ay+bz) = af(x,y) + bf(x,z)

There are two types of bilinear forms:

  • Symmetric: f(x,y) = f(y,x)
  • Antisymmetric: f(x,y) = -f(y,x)
Bilinear forms are used to assign a real number to a pair of vectors. A particular case is when the form is symmetric and positive definite: then we have a Euclidean space and therefore a scalar product. The matrix form works like the scalar product. You multiply the vector x as a row, the matrix holds the images of the pairs of basis vectors (the entry in row i, column j is f(ei, ej)), and finally the other vector is placed as a column. This matrix form of the bilinear function is called the "Gram matrix".
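The Gram matrix construction can be sketched as follows, using a made-up symmetric bilinear form on R² (the form f and the test vectors are illustrative assumptions):

```python
import numpy as np

# Hypothetical symmetric bilinear form on R^2.
def f(x, y):
    return 2 * x[0] * y[0] + x[0] * y[1] + x[1] * y[0] + 3 * x[1] * y[1]

# Gram matrix: G[i][j] = f(e_i, e_j) for the standard basis.
e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
G = np.array([[f(ei, ej) for ej in e] for ei in e])

x = np.array([1.0, 2.0])
y = np.array([-3.0, 4.0])

# Row vector times G times column vector reproduces f(x, y).
assert x @ G @ y == f(x, y)

# f is symmetric exactly when G equals its transpose.
assert np.array_equal(G, G.T)
```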