One of the most powerful mathematical methods I know of uses an operator and its inverse. Briefly and in English, you perform some operation, do something useful with the result, then do the inverse operation.

An example is changing the representation of a colour from RGB, the brightness of the red, green and blue components, to IHS, which is intensity, hue and saturation. Changing the hue is rarely useful, but changing the intensity (overall brightness) is common. Changing the saturation is especially useful: it can turn a washed-out image into one in which the colours are very bright. But few monitors have inputs for IHS images, so it is common to perform the inverse operation, converting the adjusted IHS image back to RGB.
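The round trip can be sketched with Python's standard colorsys module. It provides HSV rather than IHS proper, but the two are close cousins: a hue, a saturation-like quantity, and a brightness-like value. The colour values below are made up for illustration:

```python
import colorsys

def boost_saturation(r, g, b, factor=1.5):
    """Convert RGB -> HSV, scale the saturation, convert back to RGB."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = min(1.0, s * factor)          # clamp to the valid range [0, 1]
    return colorsys.hsv_to_rgb(h, s, v)

# A washed-out reddish grey becomes a more vivid red.
print(boost_saturation(0.7, 0.5, 0.5))
```

The hue and overall brightness survive the round trip untouched; only the saturation channel is changed in between.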

It is worth noting that neither of these operations is linear. Hue is an angle, with a circular topology, so that making something more blue will eventually make it red again. Saturation depends on only two of the three bands, the brightest and the dimmest; the middle band does not enter at all. Except where the bands are of equal intensity, to increase saturation, reduce the intensity of the dimmest band. That is a very non-linear aspect indeed, since the moment a band's intensity crosses that of another, it can abruptly switch from affecting saturation to being irrelevant.
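That sudden switch can be made concrete. Using the common HSV definition of saturation (which may differ in detail from the IHS formulation the author has in mind), only the brightest and dimmest bands matter:

```python
def saturation(r, g, b):
    """Saturation in the HSV sense: (max - min) / max."""
    brightest, dimmest = max(r, g, b), min(r, g, b)
    return (brightest - dimmest) / brightest if brightest else 0.0

# Changing only the middle band leaves saturation untouched...
print(saturation(0.9, 0.5, 0.1))   # 0.888...
print(saturation(0.9, 0.7, 0.1))   # still 0.888...
# ...until it crosses the dimmest band, at which point it suddenly matters.
print(saturation(0.9, 0.05, 0.1))  # now the green value changes the result
```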

Yet the inverse transform takes this non-linear description of colour into a simple linear one, in which the intensity of each band is equally important and increasing any one of them has a simple linear effect on the colour. This inverse operation, taking IHS into RGB, is an interesting example of linearization, to be discussed in more detail later.

The most common operations used in disciplines such as the social sciences are linear throughout and consist only of operations on vectors in an abstract linear space, as defined by matrices.

Of these linear operators, the most useful are those which take a set of vectors describing something and rotate them so that they are given as sums of a set of orthogonal basis vectors. Factor analysis can be used to find a special set of basis vectors, ideally suited to the problem at hand, but it has been shown that the Fourier transform is nearly as good in almost all cases.
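The orthogonality of the Fourier basis can be checked directly, here with NumPy (assumed available). The columns of the discrete Fourier transform matrix are mutually orthogonal:

```python
import numpy as np

# Build the DFT matrix by transforming each unit vector in turn.
n = 8
F = np.fft.fft(np.eye(n))

# The Gram matrix of its columns is n times the identity: every pair of
# distinct columns has zero inner product, and each has norm sqrt(n).
G = F.conj().T @ F
print(np.allclose(G, n * np.eye(n)))  # True
```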

The fact that the basis vectors are orthogonal means roughly the same, statistically, as saying they are uncorrelated (independence is a stronger condition, though the two coincide for Gaussian data). Thus you can operate on any one of them, or any subset of them, without affecting the others.

A typical use of this transform is in cleaning up recorded sounds. Once transformed, the resulting spectral representation clearly shows problems with the recording and allows them to be fixed. Then the inverse transform is used to turn the spectral representation back into a waveform. Interestingly enough, except for a multiplicative factor and a reversal of time, the Fourier transform is its own inverse. The spectrum of a waveform is (almost always) nothing like the waveform, as you could hear if you played it through an audio device. But you would immediately recognize the spectrum of the spectrum, if it were played the same way.
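A minimal sketch of that cleanup, assuming NumPy and a fabricated signal: a 440 Hz tone contaminated by 60 Hz hum is transformed, repaired in the spectral domain, and transformed back. The final lines also check the sense in which the transform is its own inverse, up to a scale factor and a reversal:

```python
import numpy as np

# One second of a hypothetical recording: a 440 Hz tone plus 60 Hz hum.
rate = 8000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

spectrum = np.fft.rfft(signal)     # forward transform
spectrum[60] = 0                   # with one second of data, bin k is exactly k Hz
cleaned = np.fft.irfft(spectrum)   # inverse transform back to a waveform
print(np.allclose(cleaned, np.sin(2 * np.pi * 440 * t)))  # True: hum removed

# Applying the full transform twice gives the sequence read backwards
# (cyclically, with index 0 fixed), scaled by its length.
x = np.arange(8.0)
twice = np.fft.fft(np.fft.fft(x)).real / 8
print(np.allclose(twice, np.roll(x[::-1], 1)))  # True
```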

The transforms used in factor analysis and the Fourier transform are linear ones, within a vector space. As such they have exact inverses. But they also have pseudo-inverses, which have a different dimensionality than the exact inverses. The exact inverse of something in a 13-dimensional linear vector space would also be described by 13-component vectors. But using a transform called the singular value decomposition, or SVD, these vectors can be ordered precisely by importance. The importance of a vector is given by a number called a singular value. Components with a small singular value are more or less noise.

In reducing the linear space from 13 dimensions down to 10, for example, one might be throwing away 3 dimensions of noise. The pseudo-inverse of this operation would take something described by 10-component vectors and reconstruct 13-component ones, which are the best approximation possible given the lower dimensionality of the cleaned-up data.
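The whole forget-then-reconstruct step can be sketched with NumPy on made-up data: 13 measured variables that really live in a 10-dimensional subspace, plus a little noise. NumPy's SVD returns the singular values already ordered largest first:

```python
import numpy as np

# Hypothetical data: 200 observations of 13 variables that actually
# live in a 10-dimensional subspace, plus a little noise.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 13))
data += 0.01 * rng.normal(size=(200, 13))

U, s, Vt = np.linalg.svd(data, full_matrices=False)
print(s)  # 13 singular values, largest first; the last 3 are tiny (noise)

# Forget the 3 noise dimensions, then reconstruct: the best possible
# rank-10 approximation of the original 13-component data.
k = 10
cleaned = (U[:, :k] * s[:k]) @ Vt[:k]
print(np.linalg.norm(data - cleaned) / np.linalg.norm(data))  # small
```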

A pseudo-inverse is an example of what in category theory is called an adjoint functor.

The operation of throwing away three dimensions of noise would be an example of a kind of forgetful functor, which forgets some aspect of the input. The pseudo-inverse is an example of the adjoint of a forgetful functor.

Most of category theory involves the application of theorems from one branch of mathematics to another. But category theory has many applications in the real world, and especially in the application of mathematics to understanding and changing the world: social mathematics.

The most useful of all real-world applications of the sequence functor, operation, adjoint functor is the one in which the result of applying the first functor is to produce something as much like a set of linear basis vectors as possible. That is not hard for many of the obvious applications of social mathematics, but it is quite the opposite of the usual approach.

In the social sciences, one usually seeks correlations. Basis vectors in a linear vector space are orthogonal, and the statistical counterpart of orthogonal is uncorrelated. Thus in seeking basis vectors, one should not at first look for correlations; one should look for uncorrelated variables. The larger the collection of uncorrelated variables, and the stronger the evidence that they are indeed uncorrelated, the better.
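One concrete way to carry that out, sketched with NumPy on fabricated data: gather the candidate variables and check that every off-diagonal entry of their correlation matrix is close to zero:

```python
import numpy as np

# Five hypothetical candidate variables, 1000 observations of each.
rng = np.random.default_rng(1)
variables = rng.normal(size=(5, 1000))

# Off-diagonal entries of the correlation matrix should all be near 0
# if the variables are usable as something like a set of basis vectors.
corr = np.corrcoef(variables)
off_diagonal = corr[~np.eye(5, dtype=bool)]
print(np.abs(off_diagonal).max())  # near 0
```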

This is not entirely unlike the methods of factor analysis, but that procedure does not seek a large enough collection of variables, nor attempt to linearize them. Factor analysis of data collected in a large social survey will not be very useful unless correlations with economic and political factors are considered. Economic factors are often cyclic, and must be linearized to be useful, using methods based on the Fourier transform. There are literally hundreds of real-world variables which should be added to the vector space under consideration when doing factor analysis on survey data.
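One standard way to linearize a cyclic variable, sketched here on a made-up "month of the year" variable: the raw phase jumps from 12 back to 1, so it correlates badly with anything, but replacing it by its sine and cosine, the first Fourier pair, gives two well-behaved linear coordinates:

```python
import numpy as np

# Months 1..12 as an angle around the circle.
month = np.arange(1, 13)
angle = 2 * np.pi * (month - 1) / 12
features = np.column_stack([np.cos(angle), np.sin(angle)])

# December and January are now close together, as they should be,
# while June is far from January.
print(np.linalg.norm(features[11] - features[0]))  # about 0.52
print(np.linalg.norm(features[5] - features[0]))   # about 1.93
```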

For more details on linearization and other aspects of this problem, please keep an eye on the new linear profiles site, still very much a work in progress.