dcmdump(1)

NAME

   dcmdump - Dump DICOM file and data set

SYNOPSIS

   dcmdump [options] dcmfile-in...

DESCRIPTION

   The dcmdump utility dumps the contents of a DICOM file (file format
   or raw data set) to stdout in textual form. Attributes with very
   large value fields (e.g. pixel data) may be described as '(not
   loaded)' instead of being printed in full (see option --load-short).
   String value fields are delimited with square brackets ([]). Known
   UIDs are displayed by their names, prefixed by an equals sign (e.g.
   '=MRImageStorage'), unless this mapping is explicitly switched off
   (see option --no-uid-names). Empty value fields are described as
   '(no value available)'.
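
   For illustration, a dump typically contains lines of the following
   form, where the trailing comment column shows the value length, the
   value multiplicity and the attribute name (the values, lengths and
   column alignment below are illustrative):

     (0008,0016) UI =MRImageStorage              #  26, 1 SOPClassUID
     (0010,0010) PN [Doe^John]                   #   8, 1 PatientName
     (0010,1030) DS (no value available)         #   0, 0 PatientWeight
     (7fe0,0010) OW (not loaded)                 # 262144, 1 PixelData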

   If dcmdump reads a raw data set (DICOM data without a file format
   meta header), it will attempt to guess the transfer syntax by
   examining the first few bytes of the file. It is not always possible
   to guess the transfer syntax correctly, so it is better to convert a
   data set to a file format whenever possible (using the dcmconv
   utility). Alternatively, the -f and -t[ieb] options can be used to
   force dcmdump to read a data set with a particular transfer syntax.
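
   For example, a raw data set that is known to be encoded with the
   implicit VR little endian transfer syntax could be dumped with a
   command like the following (the filename is illustrative):

     dcmdump --read-dataset --read-xfer-implicit image.raw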

PARAMETERS

   dcmfile-in  DICOM input file or directory to be dumped

OPTIONS

   general options
     -h   --help
            print this help text and exit

          --version
            print version information and exit

          --arguments
            print expanded command line arguments

     -q   --quiet
            quiet mode, print no warnings and errors

     -v   --verbose
            verbose mode, print processing details

     -d   --debug
            debug mode, print debug information

     -ll  --log-level  [l]evel: string constant
            (fatal, error, warn, info, debug, trace)
            use level l for the logger

     -lc  --log-config  [f]ilename: string
            use config file f for the logger

   input options
   input file format:

     +f   --read-file
            read file format or data set (default)

     +fo  --read-file-only
            read file format only

     -f   --read-dataset
            read data set without file meta information

   input transfer syntax:

     -t=  --read-xfer-auto
            use TS recognition (default)

     -td  --read-xfer-detect
            ignore TS specified in the file meta header

     -te  --read-xfer-little
            read with explicit VR little endian TS

     -tb  --read-xfer-big
            read with explicit VR big endian TS

     -ti  --read-xfer-implicit
            read with implicit VR little endian TS

   input files:

     +sd  --scan-directories
            scan directories for input files (dcmfile-in)

     +sp  --scan-pattern  [p]attern: string (only with --scan-directories)
            pattern for filename matching (wildcards)

            # possibly not available on all systems

     -r   --no-recurse
            do not recurse within directories (default)

     +r   --recurse
            recurse within specified directories

   long tag values:

     +M   --load-all
            load very long tag values (default)

     -M   --load-short
            do not load very long values (e.g. pixel data)

     +R   --max-read-length  [k]bytes: integer (4..4194302, default: 4)
            set threshold for long values to k kbytes

   parsing of file meta information:

     +ml  --use-meta-length
            use file meta information group length (default)

     -ml  --ignore-meta-length
            ignore file meta information group length

   parsing of odd-length attributes:

     +ao  --accept-odd-length
            accept odd length attributes (default)

     +ae  --assume-even-length
            assume real length is one byte larger

   handling of explicit VR:

     +ev  --use-explicit-vr
            use explicit VR from dataset (default)

     -ev  --ignore-explicit-vr
            ignore explicit VR (prefer data dictionary)

   handling of non-standard VR:

     +vr  --treat-as-unknown
            treat non-standard VR as unknown (default)

     -vr  --assume-implicit
            try to read with implicit VR little endian TS

   handling of undefined length UN elements:

     +ui  --enable-cp246
            read undefined len UN as implicit VR (default)

     -ui  --disable-cp246
            read undefined len UN as explicit VR

   handling of defined length UN elements:

     -uc  --retain-un
            retain elements as UN (default)

     +uc  --convert-un
            convert to real VR if known

   handling of private max-length elements (implicit VR):

     -sq  --maxlength-dict
            read as defined in dictionary (default)

     +sq  --maxlength-seq
            read as sequence with undefined length

   handling of wrong delimitation items:

     -rd  --use-delim-items
            use delimitation items from dataset (default)

     +rd  --replace-wrong-delim
            replace wrong sequence/item delimitation items

   general handling of parser errors:

     +Ep  --ignore-parse-errors
            try to recover from parse errors

     -Ep  --handle-parse-errors
            handle parse errors and stop parsing (default)

   other parsing options:

     +st  --stop-after-elem  [t]ag: "gggg,eeee" or dictionary name
            stop parsing after element specified by t

   automatic data correction:

     +dc  --enable-correction
            enable automatic data correction (default)

     -dc  --disable-correction
            disable automatic data correction

   bitstream format of deflated input:

     +bd  --bitstream-deflated
            expect deflated bitstream (default)

     +bz  --bitstream-zlib
            expect deflated zlib bitstream

   processing options
   specific character set:

     +U8  --convert-to-utf8
            convert all element values that are affected
            by Specific Character Set (0008,0005) to UTF-8

            # requires support from the libiconv toolkit

   output options
   printing:

     +L   --print-all
            print long tag values completely

     -L   --print-short
            print long tag values shortened (default)

     +T   --print-tree
            print hierarchical structure as a simple tree

     -T   --print-indented
            print hierarchical structure indented (default)

     +F   --print-filename
            print header with filename for each input file

     +Fs  --print-file-search
            print header with filename only for those input files
            that contain one of the searched tags

   mapping:

     +Un  --map-uid-names
            map well-known UID numbers to names (default)

     -Un  --no-uid-names
            do not map well-known UID numbers to names

   quoting:

     +Qn  --quote-nonascii
            quote non-ASCII and control chars as XML markup

     +Qo  --quote-as-octal
            quote non-ASCII and control chars as octal numbers

     -Qn  --print-nonascii
            print non-ASCII and control chars (default)

   color:

     +C   --print-color
            use ANSI escape codes for colored output

            # not available on Windows systems

     -C   --no-color
            do not use any ANSI escape codes (default)

            # not available on Windows systems

   error handling:

     -E   --stop-on-error
            do not print if file is damaged (default)

     +E   --ignore-errors
            attempt to print even if file is damaged

   searching:

     +P   --search  [t]ag: "gggg,eeee" or dictionary name
            print the textual dump of tag t
            this option can be specified multiple times
            (default: the complete file is printed; see also
            the examples at the end of this section)

     +s   --search-all
            print all instances of searched tags (default)

     -s   --search-first
            only print first instance of searched tags

     +p   --prepend
            prepend sequence hierarchy to printed tag,
            denoted by: (gggg,eeee).(gggg,eeee).*
            (only when used with --search)

     -p   --no-prepend
            do not prepend hierarchy to tag (default)

   writing:

     +W   --write-pixel  [d]irectory: string
            write pixel data to a .raw file stored in d
            (little endian, filename created automatically)
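
   For example, the following commands print only the patient name and
   patient ID of a file, and write its pixel data to a '.raw' file in
   the directory 'pixout', respectively (file and directory names are
   illustrative, and the output directory is assumed to exist):

     dcmdump +P PatientName +P PatientID file.dcm
     dcmdump +W pixout file.dcm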

NOTES

   Specifying directories as parameters on the command line only makes
   sense if option --scan-directories is also given. If the files in
   the provided directories should be selected according to a specific
   name pattern (e.g. using wildcard matching), option --scan-pattern
   has to be used. Please note that this pattern only applies to the
   files within the scanned directories; any other patterns specified
   on the command line outside the --scan-pattern option (e.g. in order
   to select further files) do not apply to the specified directories.
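
   For example, the following command dumps all files matching the
   pattern '*.dcm' that are found by recursively scanning the directory
   'archive' (the directory name is illustrative); the pattern is
   quoted so that it is passed to dcmdump instead of being expanded by
   the shell:

     dcmdump +sd +r +sp "*.dcm" archive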

LOGGING

   The level of logging output of the various command line tools and
   underlying libraries can be specified by the user. By default, only
   errors and warnings are written to the standard error stream. Using
   option --verbose, informational messages such as processing details
   are also reported. Option --debug can be used to get more details on
   the internal activity, e.g. for debugging purposes. Other logging
   levels can be selected using option --log-level. In --quiet mode,
   only fatal errors are reported; in such very severe error events,
   the application will usually terminate. For more details on the
   different logging levels, see the documentation of module 'oflog'.
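
   For example, the following command enables the most detailed logging
   level, 'trace', for a single dump (the filename is illustrative):

     dcmdump --log-level trace file.dcm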

   If the logging output should be written to a file (optionally with
   logfile rotation), to syslog (Unix) or to the event log (Windows),
   option --log-config can be used. This configuration file also allows
   for directing only certain messages to a particular output stream
   and for filtering certain messages based on the module or
   application where they are generated. An example configuration file
   is provided in <etcdir>/logger.cfg.
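
   For example, assuming a logger configuration file named 'logger.cfg'
   in the current directory (the filename is illustrative):

     dcmdump --log-config logger.cfg file.dcm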

COMMAND LINE

   All command line tools  use  the  following  notation  for  parameters:
   square  brackets  enclose  optional  values  (0-1), three trailing dots
   indicate that multiple values are allowed (1-n), a combination of  both
   means 0 to n values.
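
   For example, the parameter 'dcmfile-in...' in the synopsis above
   indicates that one or more input files (or, with --scan-directories,
   directories) may be given (the filenames are illustrative):

     dcmdump file1.dcm file2.dcm file3.dcm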

   Command line options are distinguished from parameters by a leading
   '+' or '-' sign. Usually, the order and position of command line
   options are arbitrary (i.e. they can appear anywhere). However, if
   options are mutually exclusive, the rightmost appearance is used.
   This behavior conforms to the standard evaluation rules of common
   Unix shells.
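
   For example, in the following command the mutually exclusive options
   --print-short (-L) and --print-all (+L) are both given; only the
   rightmost one, --print-all, takes effect (the filename is
   illustrative):

     dcmdump -L +L file.dcm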

   In addition, one or more command files can be specified  using  an  '@'
   sign  as  a  prefix to the filename (e.g. @command.txt). Such a command
   argument is replaced by the content  of  the  corresponding  text  file
   (multiple  whitespaces  are  treated  as a single separator unless they
   appear between two quotation marks) prior to  any  further  evaluation.
   Please  note  that  a command file cannot contain another command file.
   This simple but effective  approach  allows  one  to  summarize  common
   combinations  of  options/parameters  and  avoids longish and confusing
   command lines (an example is provided in file <datadir>/dumppat.txt).
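
   For example, if a text file 'common.opts' contains the options
   '--print-all --print-filename', the following two invocations are
   equivalent (file and option names are illustrative):

     dcmdump @common.opts file.dcm
     dcmdump --print-all --print-filename file.dcm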

ENVIRONMENT

   The dcmdump utility  will  attempt  to  load  DICOM  data  dictionaries
   specified  in the DCMDICTPATH environment variable. By default, i.e. if
   the  DCMDICTPATH  environment   variable   is   not   set,   the   file
   <datadir>/dicom.dic  will be loaded unless the dictionary is built into
   the application (default for Windows).

   The default behavior should be preferred, and the DCMDICTPATH
   environment variable should only be used when alternative data
   dictionaries are required. The DCMDICTPATH environment variable has
   the same format as the Unix shell PATH variable in that a colon
   (':') separates entries. On Windows systems, a semicolon (';') is
   used as a separator. The data dictionary code will attempt to load
   each file specified in the DCMDICTPATH environment variable. It is
   an error if no data dictionary can be loaded.
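
   For example, on a Unix system an additional private dictionary could
   be loaded alongside the default one as follows (the paths are
   illustrative):

     export DCMDICTPATH=/usr/local/share/dcmtk/dicom.dic:$HOME/private.dic
     dcmdump file.dcm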

SEE ALSO

   dump2dcm(1), dcmconv(1)

COPYRIGHT

   Copyright  (C)  1994-2014  by OFFIS e.V., Escherweg 2, 26121 Oldenburg,
   Germany.


