Badly Formed Macro Assignment Makefile

MAKEFILE tutorial, copied from the original site in Spain

Much of the information below is just pasted from the man pages.
 

 

The purpose of the make utility is to determine automatically which pieces of a large program need to be recompiled, and issue the commands to recompile them. To prepare to use make, you must write a file called the makefile that describes the relationships among files in your program, and states the commands for updating each file.  In a program, typically the executable file is updated from object files, which are in turn made by compiling source files.

Once a suitable makefile exists, each time you change some  source files, this simple shell command:

               make

suffices to perform all necessary recompilations.  The make program uses the makefile data base and the last-modification times of the files to decide which of the files need to be updated.  For each of those files, it issues the commands recorded in the data base.

make executes commands in the makefile to update one or more target names, where name is typically a program. Normally you should call your makefile either makefile or Makefile.  (We recommend Makefile because it appears prominently near the beginning of a directory listing, right near other important files such as README.)

The following are the relevant options of make. They must be given on the command line, and you must be sure that you control all the options that are being issued (the environment variable MAKEFLAGS can be set to some of them and you might not notice it).

          -f file
               Use file as a makefile.

           -i
                Ignore all errors in commands executed to remake files.

          -I dir
               Specifies a directory dir to search for included makefiles.  If several -I options are used to specify several directories, the directories are searched in the order specified.  Unlike the arguments to other flags of make, directories given with -I flags may come directly after the flag: -Idir is allowed, as well as -I dir.  This syntax is allowed for compatibility with the C preprocessor's -I flag.

          -n
                Print the commands that would be executed, but do not execute them.

          -p
                Print the data base (rules and variable values) that results from reading the makefiles; then execute as  usual or as otherwise specified.  This also prints the version information given by the -v switch (see below). To print the data base without trying to remake any files, use make -p -f/dev/null.

           -s
                Silent operation; do not print the commands as they are executed.
 
           -W file
               Pretend that the target file has just been modified. When used with the -n flag, this shows you what would happen if you were to modify that file.  Without -n, it is almost the same as running a touch command on the given file before running make, except that the modification time is changed only in the imagination of make.
 
 
 
The value of the SHELL environment variable will not be used as a macro and will not be modified by defining the SHELL macro in a makefile or on the command line. All other environment variables, including those with null values, are used as macros, as defined in the "Macros" section.
 

 

The rules in makefiles consist of the following types of lines: target rules, including special targets (see Target Rules);  inference rules (see Inference Rules); macro definitions (see Macros); empty lines; and comments.  Comments start with a number sign (#) and continue until an unescaped newline character is reached.

When an escaped newline character (one preceded by a backslash) is found anywhere in the makefile, it is replaced, along with any leading white space on the following line, with a single space character.

Command lines are processed one at a time by writing the command line to the standard output.  Commands will be executed by passing the command line to the command interpreter.

Command lines can have one or more of the following prefixes: a hyphen (-), an at sign (@), or a plus sign (+). These modify the way in which make processes the command. When a command is written to standard output, the prefix is not included in the output. A sketch combining the three prefixes follows the list below.

     -     If the command prefix contains a hyphen,  any error found while executing the command will be ignored.
     @   If the command prefix contains an at sign, the command will not be written to standard output before it is executed.
     +     If the command prefix contains a plus sign, this indicates a command line that will be executed even if -n, -q or -t is specified.
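A minimal sketch combining the three prefixes in one (invented) clean rule; remember that the command lines must begin with a tab:

     clean:
             -rm -f *.o core           # '-' : an error from rm (e.g. no files to remove) is ignored
             @echo directory cleaned   # '@' : this line is not written to standard output before running
             +date > clean.log         # '+' : executed even when make is invoked with -n, -q or -t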

If the string include or sinclude appears at the beginning of a line in a makefile, and is followed by a blank or a tab, the rest of the line is  assumed to be a filename and will be read by the current invocation,  after substituting for any macros.  For include it is a fatal error if the file is not readable, for sinclude a non-readable file is silently ignored.
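For example, a makefile can pull in a shared definitions file (the file names here are only illustrative):

     include common.mk         # fatal error if common.mk cannot be read
     sinclude local.mk         # silently ignored if local.mk does not exist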

Target rules are specified by the user in a makefile for a particular target.  Inference rules are user- or make -specified rules for a particular class of target names. Explicit prerequisites are those prerequisites specified in a makefile on target lines. Implicit prerequisites are those prerequisites that are generated when inference rules are used.  Inference rules are applied to implicit prerequisites or to explicit prerequisites that do not have target rules defined for them in the makefile. Target rules are applied to targets specified in the makefile.

Target rules are formatted as follows:

          target [target...]: [prerequisite...][; command ]
          [<tab> command
          <tab> command
          ...]
          line that does not begin with <tab>

Special targets (a small sketch combining them follows this list):
     .DEFAULT     If the makefile uses this special target, it must be  specified with commands, but without prerequisites. The  commands will be used by make if there are no other rules available to build a target.

     .IGNORE      Prerequisites of this special target are targets themselves; this will cause errors from commands associated with them to be ignored in the same manner as specified by the -i option. Subsequent occurrences of .IGNORE add to the list of targets ignoring command errors. If no prerequisites are specified, make will behave as if the -i option had been specified and errors from all commands associated with all targets will be ignored.

     .SILENT      Prerequisites of this special target are targets themselves; this causes commands associated with them to not be written to the standard output before they are executed. Subsequent  occurrences of .SILENT add to the list of targets with  silent commands. If no prerequisites are specified, make  will behave as if the -s option had been specified and no commands or touch messages associated with any target will be written to standard output.

     .SUFFIXES    Prerequisites of .SUFFIXES are appended to the list of known suffixes and are used in conjunction with the inference  rules (see "Inference Rules" ). If .SUFFIXES does not have any prerequisites, the list of known suffixes will be cleared. Makefiles must not associate commands with .SUFFIXES.
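A small sketch showing how these special targets might appear together in a makefile (the target names are invented):

     .DEFAULT:
             @echo no rule available for $<
     .IGNORE: clean            # errors from the commands of 'clean' are ignored
     .SILENT: install          # commands of 'install' are not echoed before execution
     .SUFFIXES: .f             # .f is appended to the list of known suffixes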

Macro definitions are in the form:

          string1 = [ string2 ]

The macro named string1 is defined as having the value of string2, where  string2 is defined as all characters, if any, after the equal sign, up to a comment character (#) or an unescaped newline character. Any blank characters immediately before or after the equal sign will be ignored.

Subsequent appearances of $(string1) or ${string1} are replaced by string2. The parentheses or braces are optional if string1 is a single character. The macro $$ is replaced by the single character $.

Macros can appear anywhere in the makefile. Macros in target lines will be evaluated when the target line is read. Macros in command lines will be evaluated when the command is executed. Macros in macro definition lines will not be evaluated until the new macro being defined is used in a rule or command. A macro that has not been defined will evaluate to a null string without causing any error condition.

Macro assignments will be accepted from the sources listed below, in the order shown. If a macro name already exists at the time it is being processed, the newer definition will replace the existing definition; an example follows the list.

     1.  Macros defined in make's built-in inference rules.
     2.  The contents of the environment, including the variables with null values, in the order defined in the environment.
     3.  Macros defined in the makefiles, processed in the order specified.
     4.  Macros specified on the command line. It is unspecified whether the internal macros defined in Internal Macros are accepted from the command line.
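For example, a definition given on the command line (source 4) replaces one given in the makefile (source 3):

     CC = cc                   # defined in the makefile
     prog: prog.c
             $(CC) -o prog prog.c

     # invoking 'make CC=gcc' overrides the definition above,
     # so the command actually executed is:  gcc -o prog prog.c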

Inference rules are formatted as follows:

          target:
          <tab>command
          [<tab>command ]
          ...
          line that does not begin with <tab> or #

The target portion must be a valid target name (see "Target Rules") of the form .s2 or .s1.s2 (where .s1 and .s2 are suffixes that have been  given as prerequisites of the .SUFFIXES special target and s1 and s2 do not contain any slashes or periods.) If there is only one period in the target, it is a single-suffix inference rule. Targets with two periods are double-suffix inference rules.  Inference rules can have only one target before the colon.

The makefile must not specify prerequisites for inference rules; no characters other than white space can follow the colon in the first line, except when creating the empty rule, described below.  Prerequisites are inferred, as described below.

The make utility uses the suffixes of targets and their prerequisites to infer how a target can be made up-to-date. A list of inference rules defines the commands to be executed. By default, make contains a built-in set of inference rules.  Additional rules can be specified in the makefile.

The special target .SUFFIXES contains as its prerequisites a list of  suffixes that are to be used by the inference rules. The order in which the suffixes are specified defines the order in which the inference rules for the suffixes are used. New suffixes will be appended to the current list by specifying a .SUFFIXES special target in the makefile. A .SUFFIXES target with no prerequisites will clear the list of suffixes. An empty .SUFFIXES target followed by a new .SUFFIXES list is required to change the order of the suffixes.
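A sketch of how the order can be changed so that, for instance, the Fortran rules are tried before the C rules:

     .SUFFIXES:                # an empty .SUFFIXES clears the current list
     .SUFFIXES: .o .f .c       # the new list: .f now comes before .c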

Normally, the user would provide an inference rule for each suffix. The inference rule to update a target with a suffix .s1 from a prerequisite with a suffix .s2 is specified as a target .s2.s1. The internal macros provide the means to specify general inference rules. (See Internal Macros)

When no target rule is found to update a target, the inference rules are checked. The suffix of the target (.s1) to be built is compared to the list of suffixes specified by the .SUFFIXES special targets. If the .s1 suffix is found in .SUFFIXES, the inference rules are searched in the order defined for the first .s2.s1 rule whose prerequisite file ($*.s2) exists. If the target is out-of-date with respect to this prerequisite, the commands for that inference rule are executed.

If the target to be built does not contain a suffix and there is no rule for the target, the single suffix inference rules will be checked. The single-suffix inference rules define how to build a target if a file is found with a name that matches the target name with one of the single suffixes appended. A rule with one suffix .s2 is the definition of how to build target from target.s2. The other suffix (.s1) is treated as null.
 

If a target or prerequisite contains parentheses, it will be treated as a member of an archive library. For the lib(member.o) expression lib refers to the name of the archive library and member.o to the member name. The member must be an object file with the .o suffix. The modification time of the expression is the modification time for the member as kept in the archive library (See ar). The .a suffix refers to an archive library. The .s2.a rule is used to update a member in the library from a file with a suffix .s2.

     $@      The $@ evaluates to the full target name of the current target, or the archive filename part of a library archive target. It is evaluated for both target and inference rules.  For example, in the .c.a inference rule, $@ represents the out-of-date .a file to be built. Similarly, in a makefile target rule to build lib.a from file.c, $@ represents the out-of-date lib.a.

     $$@     The $$@ macro stands for the full target name of the current target (which is $@).  It has meaning only on the dependency line in a makefile.  Thus, in the following:

                 cat dd: $$@.c

 the dependency is translated at execution time first to the string cat.c, then to the string dd.c.

     $%      The $% macro is evaluated only when the current target is an archive library member of the form libname(member.o). In these cases, $@ evaluates to libname and $% evaluates to member.o. The $% macro is evaluated for both target and inference rules. For example, in a makefile target rule to build lib.a(file.o), $% represents file.o as opposed to $@, which represents lib.a.

     $?      The $? macro evaluates to the list of prerequisites that are newer than the current target. It is evaluated for both target  and inference rules. For example, in a makefile target rule to build prog from file1.o, file2.o and file3.o, and where prog is not out of date  with respect to file1.o, but is out of date with respect to file2.o and file3.o, $? represents file2.o and file3.o.

     $<      In an inference rule, $< evaluates to the file name whose existence allowed the inference rule to be chosen for the target. In the .DEFAULT rule, the $< macro evaluates to the current target name. The $< macro is evaluated only for inference rules. For example, in the .c.a inference rule, $< represents the prerequisite .c file.

     $*      The $* macro evaluates to the current target name with its suffix deleted. It is evaluated at least for inference rules. For example, in the .c.a inference rule, $*.o represents the out-of-date .o file that corresponds to the prerequisite .c file.

Each of the internal macros has an alternative form. When an upper-case D or F is appended to any of the macros, the meaning is changed to the directory part for D and filename part for F. The directory part is the path prefix of the file without a trailing slash; for the current directory, the directory part is ``.''. When the $? macro contains more than one prerequisite filename, the $(?D) and $(?F) (or ${?D} and ${?F}) macros expand to a list of directory name parts and filename parts respectively.
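A small sketch of the D and F forms (prog and src/main.o are invented names; the $? values assume main.o is newer than prog):

     prog: src/main.o
             @echo $(@D)       # directory part of the target:  .
             @echo $(@F)       # filename part of the target:   prog
             @echo $(?D)       # directory parts of the newer prerequisites:  src
             @echo $(?F)       # filename parts of the newer prerequisites:   main.o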

For the target lib(member.o) and the .s2.a rule, the internal macros are defined as:

     $<      member.s2
     $*      member
     $@      lib
     $?      member.s2
     $%      member.o

      .SUFFIXES: .o .c .y .l .a .sh .f .c~ .y~ .l~ .sh~ .f~

     MAKE=make
     AR=ar
     ARFLAGS=-rv
     LFLAGS=
     LDFLAGS=
     CC=c89
     CFLAGS=-O
     FC=fort77
     FFLAGS=-O

      .c:
               $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $<
      .f:
               $(FC) $(FFLAGS) $(LDFLAGS) -o $@ $<
      .c.o:
               $(CC) $(CFLAGS) -c $<
      .f.o:
               $(FC) $(FFLAGS) -c $<
      .c.a:
               $(CC) -c $(CFLAGS) $<
               $(AR) $(ARFLAGS) $@ $*.o
               rm -f $*.o
      .f.a:
               $(FC) -c $(FFLAGS) $<
               $(AR) $(ARFLAGS) $@ $*.o
               rm -f $*.o

1. Macros used within other macros are evaluated when the new macro is used rather than when the new macro is defined. Therefore:

          MACRO = value1
          NEW   = $(MACRO)
          MACRO = value2

          target:
                 echo $(NEW)

would produce value2 and not value1 since NEW was not expanded until it was needed in the echo command line.

2. For inference rules, the descriptions of $< and $? seem similar.  However, an example shows the minor difference. In a makefile containing:

          foo.o: foo.h

if foo.h is newer than foo.o, yet foo.c is older than foo.o, the built-in rule to make foo.o from foo.c will be used, with $< equal to foo.c and $? equal to foo.h. If foo.c is also newer than foo.o, $< is equal to foo.c and $? is equal to foo.h foo.c.

1.  The following command makes the first target found in the makefile.
              make
 
2.  The following command makes the target junk.
             make junk

3.  The following makefile says that pgm depends on two files, a.o and b.o, and that they in turn depend on their corresponding source files (a.c and b.c), and a common file incl.h:
         pgm: a.o b.o
                 c89 a.o b.o -o pgm
         a.o: incl.h a.c
                 c89 -c a.c
         b.o: incl.h b.c
                 c89 -c b.c

4.  An example for making optimised .o files from .c files is:
          .c.o:
                 c89 -c -O $*.c
         or:
          .c.o:
                 c89 -c -O $<

5.  The most common use of the archive interface follows.  Here, it is assumed that the source files are all C-language source:
         lib:   lib(file1.o) lib(file2.o) lib(file3.o)
                @echo lib is now up-to-date

The .c.a rule is used to make file1.o, file2.o and file3.o and insert them into lib.

6. If $? were:  /usr/include/stdio.h /usr/include/unistd.h foo.h
then $(?D) would be: /usr/include /usr/include .
and $(?F) would be:  stdio.h unistd.h foo.h
 

 

Makedepend reads each sourcefile in sequence and parses it like a C preprocessor, processing all #include, #define, #undef, #ifdef, #ifndef, #endif, #if and #else directives, so that it can correctly tell which #include directives would be used in a compilation.  Any #include directive can reference a file that contains other #include directives, and parsing occurs in those files as well.

Every file that a sourcefile includes, directly or indirectly, is what makedepend calls a "dependency".  These dependencies are then written to a makefile in such a way that make(1) will know which object files must be recompiled when a dependency has changed.

By default, makedepend places its output in the file named makefile if it exists, otherwise Makefile. An alternate makefile may be specified with the -f option.  It first searches the makefile for the line

              # DO NOT DELETE THIS LINE -- make depend depends on it.

or one provided with the -s option, as a delimiter for the dependency output.  If it finds it, it will delete everything following this to the end of the makefile and put the output after this line.  If it doesn't find it, the program will append the string to the end of the makefile and place the output following that.  For each sourcefile appearing on the command line, makedepend puts lines in the makefile of the form

               sourcefile.o: dfile ...

where "sourcefile.o" is the name from the command line with its suffix replaced by ".o", and "dfile" is a dependency discovered in a #include directive while parsing sourcefile or one of the files it includes.

Example: inside Makefile
              SOURCES = file1.c file2.c ...
              CFLAGS = -DHACK -I../includes
              depend:
                      makedepend  $(CFLAGS)  $(SOURCES)
 

 

          gcc [ option | filename ]...

Linking is performed through a call to the compiler, but this operation involves another executable, the link editor, usually called 'ld', whose purpose is to combine several ELF object files, i.e., to link.

 -c   Compile or assemble the source files, but do not  link.
 -S   Stop after the stage of compilation proper; do not  assemble.
 -o file
        Place output in file file.  This applies regardless of the kind of output GCC is producing, whether it be an executable file, an object file, an assembler file or preprocessed C code. If you do not specify `-o', the default is to put an executable file in `a.out', the object file for `source.suffix' in `source.o', its assembler file in `source.s', and all preprocessed C source on standard output.

 -ansi  Support all ANSI standard C programs.

 -include file
         Process file as input before processing the regular input file.
 -Dmacro
         Define macro. It is the same as if the .c file contained the line #define macro.
 -Dmacro=defn
         Define macro macro as defn.

These options come into play when the compiler links  object files  into an executable output file.  They are meaningless if the compiler is not doing a link step.

object-file-name
           A file name that does not end in a  special  recognized  suffix is considered to name an object file or library.  (Object files are distinguished from libraries  by  the  linker  according to the file contents.)  If GCC does a  link step, these object files are used as input to  the  linker.

-llibrary
            Use the library named library when linking.  The  linker searches a standard list of directories for   the  library,  which   is   actually   a   file   named  `liblibrary.a'.   The  linker then uses this file as if it had been specified precisely by name. The directories searched include several standard  system directories plus any that you specify with `-L'.

-Wl,option
               Pass option as an option to the linker.  If option contains commas, it is split into multiple options at  the commas.

-Idir
           Append   directory  dir  to  the  list  of  directories searched for include files.
-Ldir
           Add  directory  dir  to  the  list of directories to be searched for `-l'.
-Bprefix
           This option specifies where to  find  the  executables, libraries and data files of the compiler itself.

-w
           Inhibit all warning messages.
-pedantic
           Issue all the warnings demanded by strict ANSI standard C; reject all programs that use forbidden extensions.
-W
           Print extra warning messages for strange events such as possible changes in variables after long jumps, a function which might not return a value, possible castings, ...
-Wall
            Print warnings for a large amount of possible error conditions.

-g
            Produce debugging information in the operating system's native  format (stabs, COFF, XCOFF, or DWARF).  GDB can work with this debugging information. This option activates debugging at all levels, but there are also options for accomplishing a fixed level of debugging or a special feature.

-O
            Produce optimized code for the program. There are two more levels of optimization (-O2 and -O3), but usually this basic level is enough. As with debugging, finer control can be applied to many individual features.

/usr/include                                     --> a mixed bag of headers belonging to many different kinds of libraries, including the C standard library
/usr/local/contrib/?????/include                 --> each package installed on bossa-nova keeps its include files in this directory
 

 

ld, the link editor, links ELF object files. The archive format accepted by ld is the one created by the archiver ar(1).

 ld is normally invoked by cc(1), although it can be run separately.  When ld is used as part of a cc compilation, the ld options must be passed via the -Wl mechanism.  See cc(1) for details of -Wl.

The ld command combines several object files into one, performs relocation, resolves external symbols, builds tables and relocation information for run-time linkage when a shared (dynamic) link is being produced, and supports symbol table information for symbolic debugging.  In the simplest case, the names of several object files are given.  ld combines them, producing an object module that can be executed or used as input for a subsequent ld run.  (In the latter case, the -r option must be given to preserve the relocation entries.)  The output of ld is left in a.out.  By default, this file is a dynamic executable if no errors occurred during the load.

There are two kinds of libraries, archives and dynamic shared objects.  When linking with archives, only those routines defining an unresolved external reference are loaded. Shared objects are used only if the output is to be dynamic; in that case, only the name is used for external resolution and no object is included as part of the output object file.  Note that symbols remaining unresolved are not considered an error when the linkage is shared or dynamic.  The library (archive) symbol table (see ar(1)) is a hash table and is searched to resolve external references that can be satisfied by library members.  The ordering of library members is unimportant.

Linking against a dynamic shared object will normally cause that object to be loaded (see rld(1) and dso(5)) whenever the object being created is loaded, thus resolving the symbols supplied by that object.  The loading of a dynamic shared object can be delayed using the -delay_load option. In this case the object is not loaded until a symbol supplied by the object is actually referenced.  Symbols from a delay loaded object do not preempt symbols from other libraries; they are resolved as if the object was last on the link line.

When searching for ucode libraries the default directories searched are /usr/lib/, /lib/ and /usr/local/lib/.  Note that, although archives will be found in /usr/local/lib/, shared objects should not be installed there, as they will not be found by rld(1).  When searching for 64bit libraries the default directories searched are /usr/lib64/, /lib64/ and /usr/local/lib64/.  When searching for n32 libraries the default directories searched are /usr/lib32/, /lib32/ and /usr/local/lib32/.

-o outfile
          Produce an output object file by the name outfile. The name of the default object file is a.out.

-lx
          Search a library libx.{so,a}, where x is a string.  A shared object or an archive is searched when its name is encountered, so the placement of a -l is significant.

-L  dir
          Change the algorithm of searching for libx.{so,a} or libx.b to look in dir before looking in the default directories.  This option is effective only if it precedes the -l options on the command line.

-v
          Set verbose mode.  Print the name of each file as it is processed.

-32
          Specifies that the object to be linked (and the input objects) are to be 32-bit ucode objects.

-n32
          Specifies that the object to be linked (and the input objects) are to be 32-bit n32 objects.

-64
          Specifies that the object to be linked (and the input objects) are to be 64-bit objects.
 

 
     /lib/lib*.so
     /lib/lib*.a
     /usr/lib/lib*.so
     /usr/lib/lib*.a
     /usr/local/lib/lib*.a
 

 





LD_LIBRARY_PATH=/usr/lib:/lib:/usr/local/lib
LD_LIBRARYN32_PATH=/usr/lib32:/lib32:/usr/local/lib32
LD_LIBRARY64_PATH=/usr/lib64:/lib64:/usr/local/lib64
 

 

Imake is used to generate Makefiles from a template, a set of cpp macro functions, and a per-directory input file called an Imakefile.  This allows machine dependencies (such as compiler options, alternate command names, and special make rules) to be kept separate from the descriptions of the various items to be built.

It is not simple to describe or understand; see 'man imake' for more information. The imake executable is in /usr/bin/X11, while the configuration files, rule generation and basic template are in /usr/lib/X11/config. A look at these files can shed some light on how this Makefile generator works.

In our package, the Makefile for XV is generated using an Imakefile.
 

 

The xmkmf command is the normal way to create a Makefile from an Imakefile shipped with third-party software.

When invoked with no arguments in a directory containing an Imakefile, the imake program is run with arguments appropriate for your system (configured into xmkmf when X was built) and generates a Makefile.

When invoked with the -a option, xmkmf builds the Makefile in the current directory, and then automatically executes ``make Makefiles'' (in case there are subdirectories), ``make includes'', and ``make depend'' for you.  This is the normal way to configure software that is outside the X Consortium build tree.

This command is also used for building Linux packages. Again the executable is in /usr/bin/X11.
 

 


Line 359 begins with spaces and not with a TAB; it is very important that rule bodies begin with a TAB.
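A minimal illustration of the problem (the file names are invented):

     good: file.o
             cc -o good file.o        # this command line begins with a TAB: accepted
     bad: file.o
         cc -o bad file.o             # this command line begins with spaces: rejected
                                      # (GNU make typically reports a 'missing separator' error)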
 

Macro definitions of the following fashion cannot be used; the definition is considered recursive. A second variable must be used for the second macro definition.
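A sketch of the failing form and of the work-around with a second variable (the variable names are only illustrative):

     CFLAGS = $(CFLAGS) -g            # fails: the macro references itself, so the
                                      # definition is treated as infinitely recursive

     BASEFLAGS = -O
     CFLAGS    = $(BASEFLAGS) -g      # works: a second variable carries the previous value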


 

An error has been detected using gcc. According to the man page the syntax is

      gcc [ option | filename ]...

but the line "gcc -I$(INC) -L$(LIB) -lnno -lm strain.cc -o strain" does not work and must be replaced by "gcc strain.cc -o strain -I$(INC) -L$(LIB) -lnno -lm"; that is to say, the input files come first and the libraries after them.
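The same two command lines written as makefile rules (INC, LIB and the nno library are the ones used in the tutorial's example):

     # does not work: the libraries are scanned before strain.cc needs them
     strain-bad:
             gcc -I$(INC) -L$(LIB) -lnno -lm strain.cc -o strain

     # works: source files first, libraries after them
     strain:
             gcc strain.cc -o strain -I$(INC) -L$(LIB) -lnno -lm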

Another source of error comes from the fact that gcc interprets *.c files as plain C code. If you want a program to be interpreted as C++ code you must use an extension for it such as *.cc or *.C.


Pay attention to the correct way of continuing a line; for example, in the first 'if' statement the backslashes (\) are very important. The following code is OK, but not the second one, where a wrong continuation has been used (marked in red); it should be removed if we want the program to be right. The error generated in this situation is shown after the words "CODE 2:"; as can be seen, it is not very useful for detecting the error.


This error can mean that the prototype of a function is not known, so an integer return value is assumed for that function. If the function actually returns a pointer then a type conflict is detected, although the problem is not the type conflict itself but that the prototype of the function has not been included. You have to check the includes.

Example Code:





 

This error is produced by some compilers when they are required to link and not to link at the same time. Other compilers resolve this ambiguity by linking. For example, look at the following command:
cc -O -D_sunOSV -I/volatil/Xmipp/Lib  elimin.c lvq_pak.o lvq_rout.o -lm elimin -L/volatil/Xmipp/Lib





 

When using templates from a different module, say "matrices.cc", you cannot (with the current implementations of compilers and linkers) compile against a library containing all those routines; instead, your program must include the whole "matrices.cc" file.


Every time you need to use a plain C routine, compiled as plain C, you must declare it as extern "C"; if not, strange prefixes and suffixes will be added to the routine name and it will not be found in the corresponding library. For header files it is useful to add at the beginning of the file a macro that opens an extern "C" block only when compiling as C++ (guarded by __cplusplus), and then to declare the routines inside that block.


 

CVD is a debugger built for SGI, so it can only read code generated by the SGI compilers, more or less. If you compile with gcc you are generating GNU debug information which cannot be read by CVD; compile with cc or CC instead.


Close the execution window, despite the warning telling you that the execution might be wrong.


Defining a matrix as double T[3][3]; is not the same as defining it as double **T; and then allocating twice. In the second case an extra array of pointers is created besides the arrays of double data, while in the first case this extra array does not exist.


The class img is using another class which has not been defined within this context; most likely an include is missing.


Check that all modules have been compiled with the same compiler and libraries. A usual symptom is that the linker reports references to functions that supposedly do not exist, when you know they do.


This can happen even when you know that you have set the right paths to see all the includes. Take into account that symbols like ~ are not resolved; the two commands sketched below are not the same, and the first one will not work.
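A sketch of the difference inside a makefile (the Xmipp/Lib path is only illustrative):

     wrong:
             gcc -c prog.c -I~/Xmipp/Lib          # ~ is not expanded here: the directory is not found
     right:
             gcc -c prog.c -I$(HOME)/Xmipp/Lib    # spell the path out or use $(HOME)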


Libraries to be loaded must be at the end of the command line; a line with the libraries before the object files will not work (see the sketch after the next paragraph).


Also, you must specify the libraries from most dependent to least dependent; i.e., if a library makes use of symbols from a second library, the command line must list the first library before the second one, and this order is very important. Otherwise, some symbols might not be found when linking.
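A sketch covering both points (libXmipp and XMIPPLIB are hypothetical names; libXmipp is assumed to use symbols from libm):

     # does not work: the libraries appear before the object files that need them
     wrong: prog.o
             gcc -lXmipp -lm prog.o -o prog

     # works: object files first, then the libraries ordered from the most
     # dependent to the least dependent
     prog: prog.o
             gcc prog.o -o prog -L$(XMIPPLIB) -lXmipp -lm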


If you are using sscanf(str, "%s %d", another_str, &integer_var), you must be sure that integer_var is of type int; if it is short, for instance, the string another_str will not be read properly. sscanf cannot work with STL strings either: even if you use %s in the format, you cannot pass str.c_str() as the destination. The solution is to use an intermediate char array as an interface between strings and sscanf.


Be careful: the reserve function of STL vectors allocates memory but does not call your object's initialiser, so it is better to call push_back, or to initialise the objects manually.
 

 

From this point on you will find several examples from Xmipp; I hope you find them useful.
  1. Xmipp: General Xmipp Makefile, for the whole package
  2. User Libraries: Makefile for a User Defined Library, such as Xmipp/Lib
  3. User Programs: Makefile.std for a User Program, such as Xmipp/Artkbcc
  4. Merging Fortran: How to include Fortran routines in our C programs
  5. Checking Heading Dependencies: usually 'make' does not check dependencies with .h files
  6. Using Qt
  7. Choosing between 32 or 64 bits compilation
  8. Using C++
Carlos Oscar
 

 

The following example is the complete Xmipp Makefile; some comments are made throughout the file. Much of the redundant information (many directories, programs, ...) has been removed, since here I only want to give an idea of what should be done.
 

 The Makefile.std for a User Defined Library is like the following.
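A minimal sketch of what such a library Makefile typically contains (all names are invented, they are not the real Xmipp ones):

     CXX      = g++
     CXXFLAGS = -O -I../Lib
     AR       = ar
     ARFLAGS  = -rv

     OBJS     = matrices.o volumes.o fileio.o

     .SUFFIXES: .cc .o

     libMyLib.a: $(OBJS)
             $(AR) $(ARFLAGS) libMyLib.a $(OBJS)

     .cc.o:
             $(CXX) $(CXXFLAGS) -c $<

     clean:
             rm -f $(OBJS) libMyLib.a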


In this example we will see how the final Makefile relates to the original Makefile.std. Remember that Makefile.std keeps the particularities of a given program while local_defs maintains the information about the variables for this compilation; these two files are merged to yield the final Makefile. The original Makefile.std for artKbcc is shown as an example; nothing is really new in this Makefile.

When local_defs is taken into account, all the variable definitions are placed at the beginning of the file. Note that from the ####...#### line onwards the two files are identical.

I will only indicate the changes with respect to the standard Makefile that are needed in order to merge Fortran routines with C. More information about the passing (in and out) of parameters can be found in .../Xmipp/3Dpack/Radon/*
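A minimal sketch of the usual arrangement, assuming a C main program that calls a Fortran routine; the name of the Fortran run-time library (-lftn below) varies from system to system, so take it only as a placeholder:

     CC = cc
     FC = f77

     radon: main.o radon_f.o
             $(CC) main.o radon_f.o -o radon -lftn -lm    # the Fortran run-time library is needed when linking

     main.o: main.c
             $(CC) -c main.c
     radon_f.o: radon_f.f
             $(FC) -c radon_f.f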


This Makefile is taken from a Qt test program; please don't pay attention to the parameters, I only want to make some remarks on the dependencies generated by makedepend. The idea is to take into account not only changes in the source (.c) files, but also changes in the header files. The first time the Makefile is created, the file ends at the line that says "DO NOT DELETE THIS LINE ..."; after the first call to make (which should be 'make depend'), the lines below this point are added by makedepend itself.
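A sketch of the layout (the source names are invented); everything below the delimiter line is rewritten by makedepend each time 'make depend' is run:

     SOURCES = viewer.c canvas.c
     CFLAGS  = -I$(QTDIR)/include

     viewer: viewer.o canvas.o
             $(CC) viewer.o canvas.o -o viewer

     depend:
             makedepend $(CFLAGS) $(SOURCES)

     # DO NOT DELETE THIS LINE -- make depend depends on it.
     # (after 'make depend', lines such as  viewer.o: viewer.h canvas.h  appear here)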


This Makefile is the same as the one in the previous example, but this time I want to emphasise the libraries and options selected for compilation with Qt. All the options are taken from a demo Qt program; these options are not strictly required, and in fact I would remove several of them in order to have more or less the same options in all Xmipp programs. All the options that are "optional" are marked in blue.


Just suppose we want to link a program for 32 or 64 bits. First of all, it is very important that you have the correct paths to the libraries in the variables

LD_LIBRARY_PATH=/usr/lib:/lib:/usr/local/lib
LD_LIBRARYN32_PATH=/usr/lib32:/lib32:/usr/local/lib32
LD_LIBRARY64_PATH=/usr/lib64:/lib64:/usr/local/lib64
 
This is usually done in the .profile file. Now it is time to say which library we want to compile with (remember that things only run with 64 bits on bossa-nova, and not on rumba, indy, b12sg1, ...). Let's start with 32 bits and then point out the changes for 64 bits. It is important to note that 32 bits must be assumed in the compile stage, the linking stage and the library inclusion!!! That is to say, even the library which is referenced must be compiled with 32 bits. For instance, for the Markhan program the Makefile is
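A minimal sketch (with invented file names) of the idea, showing only the two places where the -32 flag must appear, once when compiling and once when linking:

     CC      = cc
     CFLAGS  = -32 -O -I$(XMIPP)/Lib      # 32-bit code generation when compiling
     LDFLAGS = -32 -L$(XMIPP)/Lib         # and again when linking

     markhan: markhan.o
             $(CC) $(LDFLAGS) markhan.o -o markhan -lS62mip -lm
     markhan.o: markhan.c
             $(CC) $(CFLAGS) -c markhan.c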

As can be seen, there is no simple way of expressing the 32-bit compilation in both stages (compiling and linking), just because in the link phase there is no way of passing flags from Makefile.Xmipp.  DON'T FORGET that ALL libraries involved must be compiled with 32 bits; namely, libm.a and libS62mip.a must have been compiled with 32 bits.

If we wanted to generate a file to run with 64 bits, we would only need to substitute the two -32 flags in the Makefile with -64. In theory any C program could run on a 32-bit architecture as well as on a 64-bit one, unless the programmer uses pointers in a 'bad' manner. In pointer arithmetic, if p is a pointer to a structure, p+1 is a pointer to the following structure and not to the following byte; the compiler does the work of calculating the exact displacement that must be applied to the pointer. However, there are programmers who prefer to do it themselves and write something like (struct structname *) ((char *) p + sizeof(struct structname)). This kind of instruction changes from a 32-bit to a 64-bit architecture, as addresses differ in the number of bytes. I don't know if there are any other problems such that a program written for 32 bits could not run in 64.
 

 

*** See groe.h in ~coss/Xmipp/Lib to write later a description of what has to be changed in order to mix C and C++ code.

Crypto++ mostly supports the Solaris operating system. Crypto++ 5.6.3 and below had mediocre support for Solaris and Sun Studio because it was effectively running with CRYPTOPP_DISABLE_ASM enabled, even on i86 platforms. Crypto++ 5.6.4 added first class support for the Solaris Intel platform, and benchmarks for 5.6.4 will run considerably faster than 5.6.3 and below. This wiki page will detail how to compile the Crypto++ library and programs on Solaris, and how to get the most out of the i86 platform. The page will also provide information in the context of SunCC, which is Oracle's C++ compiler.

The Sun C++ compiler was endowed with GCC-style inline ASM at SunCC 5.10, which is part of Sun Studio 12. Also see GCC-style asm inlining support in Sun Studio 12 compilers. The inline assembly support means there is opportunity to have Crypto++ perform as well on Solaris as it does other Linux and Unix platforms. Crypto++ first offered Sun C++ compiler integration at 5.6.4 with Commit b1df5736a7191eb1.

Be certain you have enough virtual memory before you attempt to compile some of the heavier source files. If you don't have enough virtual memory, then you will experience bizarre, unexplained failures during the build. We were surprised to learn our test machine, a ProLiant G5 server with 8 GB of RAM and 100+ GB of free storage, was running out of virtual memory. Also see Verify there's enough memory and storage to compile a file? on Super User.

Finally, be certain you use the same compiler, the same compiler options, and the same C++ runtimes to build the library and your programs. Do not mix and match them. Also see GNUmakefile | Creating Programs on the Crypto++ wiki. This nuance has caused so many issues over the years we cannot recount them all.

Make and GNUmake

You must use GNU's make to build the library on Solaris. Sun's default make program will produce unexplained errors, and CMake will use random flags.

$ cd cryptopp
$ make
make: Fatal error: No arguments to build

$ make -f GNUmakefile
make: Fatal error in reader: GNUmakefile, line 5: Badly formed macro assignment

By default, the GNUmakefile will use whatever the system's default C++ compiler is. To build the library with the default compiler, run gmake:

$ gmake -j 4
g++ -DNDEBUG -g2 -O2 -fPIC -march=native -m64 -Wa,--divide -pipe -c cryptlib.cpp
g++ -DNDEBUG -g2 -O2 -fPIC -march=native -m64 -Wa,--divide -pipe -c cpu.cpp
g++ -DNDEBUG -g2 -O2 -fPIC -march=native -m64 -Wa,--divide -pipe -c integer.cpp
...

If you want to use Sun's C++ compiler, then specify it in CXX. Also see C++ Compiler below for more on the Sun compiler.

$ CXX=/opt/solarisstudio12.4/bin/CC gmake -j 4
/opt/solarisstudio12.4/bin/CC -DNDEBUG -g3 -xO2 -m64 -native -KPIC -template=no%extdef -c cryptlib.cpp
/opt/solarisstudio12.4/bin/CC -DNDEBUG -g3 -xO2 -m64 -native -KPIC -template=no%extdef -c cpu.cpp
/opt/solarisstudio12.4/bin/CC -DNDEBUG -g3 -xO2 -m64 -native -KPIC -template=no%extdef -c integer.cpp
...

C++ Compiler

Oracle's C++ compiler is known as SunCC in Crypto++. Each version of Sun Studio or Solaris Studio will supply the compiler, called CC. There will probably be a few of them installed if different versions of Sun Studio are available:

$ find /opt -name CC | grep bin
/opt/developerstudio12.5/bin/CC
/opt/solarisstudio12.4/bin/CC
/opt/solarisstudio12.3/bin/CC
/opt/solstudio12.2/bin/CC

SunCC differs from GCC in a number of ways. It does not consume GCC's -march=native, and even when an -xarch option is given there is no way for the library or program to tell which one was specified, because the compiler does not signal it. GCC-style preprocessor defines, such as __SSE2__, __SSSE3__, __SSE4_1__, __AES__, __PCLMUL__ and __AVX__, are simply missing. This detail was the biggest gap to close when providing better SunCC support.

As of this writing, -std=c++03 and -std=c++11 are almost incompatible with SunCC when the build uses -xarch=avx and above. See C++03 and C++11 below for more details.

To compile Crypto++ to the equivalent of GCC's -march=native, you will need to supply the preprocessor macros yourself. Additionally, you may need an appropriate -xarch option depending on the compiler version. The Crypto++ library test script, cryptest.sh, goes to great lengths to determine processor features, and then provide the proper set of defines and options. You will likely need to perform the same to get the best performance from the library.

Library Defines

The Crypto++ library depends upon GCC-style preprocessor macros like __GNUC__ and __BMI__ to enable code paths. Here's an example of a simple one-liner from the library that uses BMI's _blsr_u32 instruction:

#if defined(__GNUC__) && defined(__BMI__)
template <>
inline bool IsPowerOf2<word32>(const word32 &value)
{
    return value > 0 && _blsr_u32(value) == 0;
}
#endif

At Crypto++ 5.6.4 the library unconditionally defined __SSE2__ in its configuration header for SunCC at Commit b1df5736a7191eb1. By defining __SSE2__, all users enjoy at least SSE2 support. SSE2 is the majority of the specialized implementations, and it includes both SSE2 ASM and SSE2 intrinsics.

#if !defined(CRYPTOPP_DISABLE_ASM) && !defined(__SSE2__) && defined(__x86_64__) && (__SUNPRO_CC >= 0x5100)
# define __SSE2__ 1
#endif

To take advantage of additional CPU features you will have to manually define the missing preprocessor macros. The library's test script does so in an effort to test the code paths: it checks the CPU flags with isainfo, and then manually adds the defines to CXXFLAGS. isainfo is similar to Linux's /proc/cpuinfo. Below is a sample output from isainfo -v.

$ isainfo -v
64-bit amd64 applications
        avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2
        sse fxsr mmx cmov amd_sysc cx8 tsc fpu rdrand
32-bit i386 applications
        avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2
        sse fxsr mmx cmov sep cx8 tsc fpu rdrand

If all you want are the extra defines, then simply run cryptest.sh and note the PLATFORM_CXXFLAGS it reports. Below is output from an HP ProLiant G5 with dual Xeons.

$ ./cryptest.sh
IS_SOLARIS: 1
IS_X64: 1
...
Compiler: Studio 12.5 Sun C++ 5.14 SunOS_i386 2016/05/31
Pathname: /opt/developerstudio12.5/bin/CC
...
PLATFORM_CXXFLAGS: -D__SSE2__ -D__SSE3__ -D__SSSE3__ -xarch=ssse3

Here is another example from a Solaris workstation running on a 4th generation Core i5. The 4th gen Core i5 provides up to AVX.

$ ./cryptest.sh
IS_SOLARIS: 1
IS_X64: 1
...
Compiler: Sun C++ 5.13 SunOS_i386 2014/10/20
Pathname: /opt/solarisstudio12.4/bin/CC
...
PLATFORM_CXXFLAGS: -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__PCLMUL__ -D__AES__ -D__RDRND__ -D__AVX__ -xarch=avx

And here is one from a 5th generation Core i5, which includes BMI and ADX.

$ ./cryptest.sh
IS_SOLARIS: 1
IS_X64: 1
...
Compiler: Studio 12.5 Sun C++ 5.14 SunOS_i386 2016/05/31
Pathname: /opt/developerstudio12.5/bin/CC
...
PLATFORM_CXXFLAGS: -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__PCLMUL__ -D__AES__ -D__RDRND__ -D__RDSEED__ -D__AVX__ -D__AVX2__ -D__BMI__ -D__BMI2__ -D__ADX__ -xarch=avx2_i

From the three examples above, it can be seen that -xarch is also a moving target, dependent upon both CPU feature flags and compiler version. The compiler will give you a good error message with respect to -xarch, so it will be fairly easy to get right. Also see ube error: _mm_aeskeygenassist_si128 intrinsic requires at least -xarch=aes on Stack Overflow.

One non-obvious note: you need -xarch=avx2_i (as in the last example above) to enable ADX. ADX provides Add-with-Carry/Add-with-Overflow, which allows pipelining some big integer operations. Also see the New Instructions Supporting Large Integer Arithmetic on Intel Architecture Processors whitepaper.

Building Crypto++

Now that you know where the compiler is and how to define the flags, all you have to do is build the library and test it. All of this is covered elsewhere on the wiki, but it is restated here for completeness. Since you are overriding the default CXXFLAGS, you also have to set the Debug/Release build configuration information yourself; that is the -DNDEBUG -g2 -O2 shown below.

The makefile will add the remainder of the flags, such as -m64, -native, -KPIC and -template=no%extdef. If you need to change those flags, then you will need to edit the makefile. There's another way to change flags, and it is discussed in the wiki article.

# Set the preprocessor macros to enable code paths
$ export CXXFLAGS="-DNDEBUG -g2 -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__PCLMUL__ -D__AES__ -D__RDRND__ -D__RDSEED__ -D__AVX__ -D__AVX2__ -D__BMI__ -D__BMI2__ -D__ADX__ -xarch=avx2_i"

# Build it!
$ CXX=/opt/developerstudio12.5/bin/CC gmake -j 2
/opt/developerstudio12.5/bin/CC -DNDEBUG -g2 -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__AES__ -D__PCLMUL__ -D__RDRND__ -D__RDSEED__ -D__AVX__ -D__AVX2__ -D__BMI__ -D__BMI2__ -D__ADX__ -xarch=avx2_i -m64 -native -KPIC -template=no%extdef -c cryptlib.cpp
/opt/developerstudio12.5/bin/CC -DNDEBUG -g2 -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__AES__ -D__PCLMUL__ -D__RDRND__ -D__RDSEED__ -D__AVX__ -D__AVX2__ -D__BMI__ -D__BMI2__ -D__ADX__ -xarch=avx2_i -m64 -native -KPIC -template=no%extdef -c cpu.cpp
/opt/developerstudio12.5/bin/CC -DNDEBUG -g2 -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__AES__ -D__PCLMUL__ -D__RDRND__ -D__RDSEED__ -D__AVX__ -D__AVX2__ -D__BMI__ -D__BMI2__ -D__ADX__ -xarch=avx2_i -m64 -native -KPIC -template=no%extdef -c integer.cpp
/opt/developerstudio12.5/bin/CC -DNDEBUG -g2 -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__AES__ -D__PCLMUL__ -D__RDRND__ -D__RDSEED__ -D__AVX__ -D__AVX2__ -D__BMI__ -D__BMI2__ -D__ADX__ -xarch=avx2_i -m64 -native -KPIC -template=no%extdef -c shacal2.cpp
...

Testing Crypto++

Once the library builds, you must run the validation suite and test vectors. Solaris is effectively a new platform since the ASM was enabled en masse, so the build must be functionally tested to provide assurances that are taken for granted on Linux and Unix.

You run the tests with cryptest.exe v and cryptest.exe tv all. There should be 0 failures, as shown below.

$ ./cryptest.exe v
Using seed: 1473740352

Testing Settings...
...
All tests passed!

And:

$ ./cryptest.exe tv all
Using seed: 1473740479

Testing FileList algorithm all.txt collection.
...
Tests complete. Total tests = 5248. Failed tests = 0.

Once the library tests OK, it's ready to be installed and used by programs.

Mapfile

The makefile uses a mapfile on i86pc targets to allow object files to mask or hide additional hardware capability. The mapfile is named cryptopp.mapfile, and the recipe is shown below. The library does so because it will often build for more capable machines. For example, the library will include SHA hardware instructions on an early iCore even though an early iCore cannot execute them.

You can disable the extra code with a define of the form CRYPTOPP_DISABLE_XXX, where XXX names a feature such as AESNI or CLMUL.

# For SunOS, create a Mapfile that allows our object files to
# contain additional bits (like SSE4 and AES on old Xeon)
ifeq ($(IS_SUN)$(SUN_COMPILER),11)
  ifneq ($(IS_X86)$(IS_X32)$(IS_X64),000)
    ifeq ($(findstring -DCRYPTOPP_DISABLE_ASM,$(CXXFLAGS)),)
      ifeq ($(wildcard cryptopp.mapfile),)
        $(shell echo "hwcap_1 = SSE SSE2 OVERRIDE;" > cryptopp.mapfile)
        $(shell echo "" >> cryptopp.mapfile)
      endif # Write mapfile
      LDFLAGS += -M cryptopp.mapfile
    endif # No CRYPTOPP_DISABLE_ASM
  endif # X86/X32/X64
endif # SunOS

C++03 and C++11

As of this writing (September 2016), the library will fail to compile with Sun Studio 12.4/SunCC 5.13 with -std=c++03 or -std=c++11. We don't know why the compiler crashes as shown below, but we have an open question on Stack Overflow and we reached out to a friend of the project who works for Oracle.

First, this how the compile is supposed to look (using Sun Studio 12.5):

$ /opt/developerstudio12.5/bin/CC -std=c++03 -DNDEBUG -g2 -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__AES__ -D__PCLMUL__ -D__RDRND__ -D__RDSEED__ -D__AVX__ -D__AVX2__ -D__BMI__ -D__BMI2__ -D__ADX__ -xarch=avx2_i -m64 -native -KPIC -template=no%extdef -c gcm.cpp

The next two are the failures when using -std=c++03 or -std=c++11.

# Fail with C++03
$ /opt/solarisstudio12.4/bin/CC -std=c++03 -DNDEBUG -g2 -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__AES__ -D__PCLMUL__ -D__RDRND__ -D__AVX__ -xarch=avx -m64 -native -KPIC -template=no%extdef -c gcm.cpp
>> Assertion:   (../lnk/g3mangler.cc, line 825)
     while processing gcm.cpp at line 413.

# Fail with C++11
$ /opt/solarisstudio12.4/bin/CC -std=c++11 -DNDEBUG -g2 -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__AES__ -D__PCLMUL__ -D__RDRND__ -D__AVX__ -xarch=avx -m64 -native -KPIC -template=no%extdef -c gcm.cpp
>> Assertion:   (../lnk/g3mangler.cc, line 825)
     while processing gcm.cpp at line 413.

The -std=c++03 and -std=c++11 crashes can also be reproduced using Sun Studio 12.3/SunCC 5.12.

We know of two rather poor work-arounds. First, you can clamp the CPU features below AVX. Second, you can avoid using -std=c++03 and -std=c++11.

AES-NI and CLMUL

As of this writing (September 2016), AES-NI and Carryless Multiply under Sun Studio 12.3/SunCC 5.12 are another Solaris issue we are struggling with. It does not appear to be related to the C++03 and C++11 errors:

$ /opt/solarisstudio12.3/bin/CC -DNDEBUG -g -O2 -D__SSE2__ -D__SSE3__ -D__SSSE3__ -D__SSE4_1__ -D__SSE4_2__ -D__AES__ -D__PCLMUL__ -xarch=aes -m64 -KPIC -template=no%extdef -c gcm.cpp
assertion failed in function bfd_asm_lf_dump() @ bfd_asm.c:3286
assert(mit_alternates_has_(op, IMM_ALTERNATE))
CC: ube failed for gcm.cpp

We isolated the issue and reported it to Oracle through a private contact (we don't have a service contract). The work-around is to avoid CLMUL when using Sun Studio 12.3 and 12.4. We disabled CLMUL in GCM for SunCC 12.3 and 12.4:

// http://github.com/weidai11/cryptopp/issues/226
#if defined(__SUNPRO_CC) && (__SUNPRO_CC <= 0x5130)
# undef CRYPTOPP_CLMUL_AVAILABLE
#endif

Solaris 10 and below

The previous information was based on Solaris 11 on Intel hardware using Sun Studio 12. We purchased an UltraSparc for testing Solaris 10. However, when the machine arrived it was an Intel Core2 Duo (and not an UltraSparc), and it lacked an operating system. Oracle does not make the older operating systems available, nor do they make the older Sun Studios available. Finally, Oracle does not answer email requests for the same.

We were not able to do any testing with Solaris 10 or Sun Studio 11. It may work, or it may not work. "Patches are welcome", as they say. Also see the section It used to work!!! below.

Sparc and UltraSparc

We lack access to a Sparc or UltraSparc machine, so there's nothing interesting to discuss. A Sparc or UltraSparc build should use vanilla flags, and it should look similar to the following. Again, it may work, or it may not work. "Patches are welcome", as they say.

$ gmake -j 2
CC -DNDEBUG -g2 -O2 -m64 -native -KPIC -template=no%extdef -c cryptlib.cpp
CC -DNDEBUG -g2 -O2 -m64 -native -KPIC -template=no%extdef -c cpu.cpp
CC -DNDEBUG -g2 -O2 -m64 -native -KPIC -template=no%extdef -c integer.cpp
CC -DNDEBUG -g2 -O2 -m64 -native -KPIC -template=no%extdef -c shacal2.cpp
...

It used to work!!!

If Crypto++ used to work for you under Crypto++ 5.6.2, but fails to work as expected under 5.6.3 or 5.6.4, then it's probably due to makefile changes or changes in the source code. You can use Git to go back in time to help isolate the problem. The example below uses the Crypto++ 5.6.2 makefile to build the latest sources.

$ git clone https://github.com/weidai11/cryptopp cryptopp-past-and-present
Cloning into 'cryptopp-past-and-present'...
...
$ cd cryptopp-past-and-present
$ git checkout CRYPTOPP_5_6_2
Note: checking out 'CRYPTOPP_5_6_2'.
You are in 'detached HEAD' state...

$ cp GNUmakefile GNUmakefile-5.6.2   # save the old makefile
$ git checkout master -f             # or 'checkout CRYPTOPP_5_6_5'
$ cp bench1.cpp bench.cpp            # account for the bench.cpp -> bench1.cpp rename
                                     # which occurred at Crypto++ 5.6.3 (5.6.2 lacks it)
$ make -f GNUmakefile-5.6.2          # use the old makefile
c++ -DNDEBUG -g -O2 -DCRYPTOPP_DISABLE_ASM -pipe -c shacal2.cpp
c++ -DNDEBUG -g -O2 -DCRYPTOPP_DISABLE_ASM -pipe -c md5.cpp
c++ -DNDEBUG -g -O2 -DCRYPTOPP_DISABLE_ASM -pipe -c shark.cpp
c++ -DNDEBUG -g -O2 -DCRYPTOPP_DISABLE_ASM -pipe -c zinflate.cpp
...

We use the technique above often to determine if we introduced a break.
