GNU bug report logs - #17280
"Untangle" script to lay foundations for refactoring dfa.c


Package: grep; Severity: wishlist; Reported by: behoffski <behoffski@HIDDEN>; dated Thu, 17 Apr 2014 02:52:01 UTC; Maintainer for grep is bug-grep@HIDDEN.
Severity set to 'wishlist' from 'normal' Request was from Paul Eggert <eggert@HIDDEN> to control <at> debbugs.gnu.org. Full text available.

Message received at 17280 <at> debbugs.gnu.org:


Received: (at 17280) by debbugs.gnu.org; 17 Apr 2014 05:50:16 +0000
From debbugs-submit-bounces <at> debbugs.gnu.org Thu Apr 17 01:50:16 2014
Received: from localhost ([127.0.0.1]:50154 helo=debbugs.gnu.org)
	by debbugs.gnu.org with esmtp (Exim 4.80)
	(envelope-from <debbugs-submit-bounces <at> debbugs.gnu.org>)
	id 1WafDE-0007Gf-0H
	for submit <at> debbugs.gnu.org; Thu, 17 Apr 2014 01:50:16 -0400
Received: from smtp.cs.ucla.edu ([131.179.128.62]:43518)
 by debbugs.gnu.org with esmtp (Exim 4.80)
 (envelope-from <eggert@HIDDEN>) id 1WafDA-0007GP-Lb
 for 17280 <at> debbugs.gnu.org; Thu, 17 Apr 2014 01:50:13 -0400
Received: from localhost (localhost.localdomain [127.0.0.1])
 by smtp.cs.ucla.edu (Postfix) with ESMTP id 657FFA60015;
 Wed, 16 Apr 2014 22:50:06 -0700 (PDT)
X-Virus-Scanned: amavisd-new at smtp.cs.ucla.edu
Received: from smtp.cs.ucla.edu ([127.0.0.1])
 by localhost (smtp.cs.ucla.edu [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id wMeeG1log6AY; Wed, 16 Apr 2014 22:49:57 -0700 (PDT)
Received: from [192.168.1.9] (pool-108-0-233-62.lsanca.fios.verizon.net
 [108.0.233.62])
 by smtp.cs.ucla.edu (Postfix) with ESMTPSA id C7E4CA6000F;
 Wed, 16 Apr 2014 22:49:57 -0700 (PDT)
Message-ID: <534F6B80.7080209@HIDDEN>
Date: Wed, 16 Apr 2014 22:49:52 -0700
From: Paul Eggert <eggert@HIDDEN>
Organization: UCLA Computer Science Department
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
 rv:24.0) Gecko/20100101 Thunderbird/24.4.0
MIME-Version: 1.0
To: behoffski <behoffski@HIDDEN>, 17280 <at> debbugs.gnu.org
Subject: Re: bug#17280: "Untangle" script to lay foundations for refactoring
 dfa.c
References: <534F4124.2080604@HIDDEN>
In-Reply-To: <534F4124.2080604@HIDDEN>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Spam-Score: -3.0 (---)
X-Debbugs-Envelope-To: 17280
X-BeenThere: debbugs-submit <at> debbugs.gnu.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: <debbugs-submit.debbugs.gnu.org>
List-Unsubscribe: <http://debbugs.gnu.org/cgi-bin/mailman/options/debbugs-submit>, 
 <mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=unsubscribe>
List-Archive: <http://debbugs.gnu.org/cgi-bin/mailman/private/debbugs-submit/>
List-Post: <mailto:debbugs-submit <at> debbugs.gnu.org>
List-Help: <mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=help>
List-Subscribe: <http://debbugs.gnu.org/cgi-bin/mailman/listinfo/debbugs-submit>, 
 <mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=subscribe>
Errors-To: debbugs-submit-bounces <at> debbugs.gnu.org
Sender: "Debbugs-submit" <debbugs-submit-bounces <at> debbugs.gnu.org>
X-Spam-Score: -3.0 (---)

Thanks, wow.  I like the idea of splitting things apart, as dfa.c is 
indeed too large and tricky.  This'll have to wait until after the next 
grep release, though, at least in terms of the reviewing time I can 
invest.  At this point we have six performance patches in the queue 
(17136, 17203, 17204, 17229, 17230, 17240), and my hope is that we can 
put out the next version once they're reviewed.

There's also the issue that gawk uses dfa.c etc., so any changes we make 
to grep should be done with gawk in mind.




Information forwarded to bug-grep@HIDDEN:
bug#17280; Package grep. Full text available.

Message received at submit <at> debbugs.gnu.org:


Received: (at submit) by debbugs.gnu.org; 17 Apr 2014 02:51:52 +0000
From debbugs-submit-bounces <at> debbugs.gnu.org Wed Apr 16 22:51:52 2014
Received: from localhost ([127.0.0.1]:50096 helo=debbugs.gnu.org)
	by debbugs.gnu.org with esmtp (Exim 4.80)
	(envelope-from <debbugs-submit-bounces <at> debbugs.gnu.org>)
	id 1WacQZ-0002Fn-Tx
	for submit <at> debbugs.gnu.org; Wed, 16 Apr 2014 22:51:52 -0400
Received: from eggs.gnu.org ([208.118.235.92]:46977)
 by debbugs.gnu.org with esmtp (Exim 4.80)
 (envelope-from <behoffski@HIDDEN>) id 1WacQV-0002FV-If
 for submit <at> debbugs.gnu.org; Wed, 16 Apr 2014 22:51:47 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
 (envelope-from <behoffski@HIDDEN>) id 1WacPu-0003Dy-2Z
 for submit <at> debbugs.gnu.org; Wed, 16 Apr 2014 22:51:42 -0400
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on eggs.gnu.org
X-Spam-Level: ***
X-Spam-Status: No, score=3.7 required=5.0 tests=BAYES_50,FILL_THIS_FORM,
 FILL_THIS_FORM_LOAN autolearn=disabled version=3.3.2
Received: from lists.gnu.org ([2001:4830:134:3::11]:51092)
 by eggs.gnu.org with esmtp (Exim 4.71)
 (envelope-from <behoffski@HIDDEN>) id 1WacPt-0003Dm-LI
 for submit <at> debbugs.gnu.org; Wed, 16 Apr 2014 22:51:09 -0400
Received: from eggs.gnu.org ([2001:4830:134:3::10]:48461)
 by lists.gnu.org with esmtp (Exim 4.71)
 (envelope-from <behoffski@HIDDEN>) id 1WacPO-0004pA-1S
 for bug-grep@HIDDEN; Wed, 16 Apr 2014 22:51:09 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
 (envelope-from <behoffski@HIDDEN>) id 1WacOs-00031J-RA
 for bug-grep@HIDDEN; Wed, 16 Apr 2014 22:50:37 -0400
Received: from ipmail06.adl6.internode.on.net
 ([2001:44b8:8060:ff02:300:1:6:6]:19514)
 by eggs.gnu.org with esmtp (Exim 4.71)
 (envelope-from <behoffski@HIDDEN>) id 1WacOh-0002nL-23
 for bug-grep@HIDDEN; Wed, 16 Apr 2014 22:50:06 -0400
X-IronPort-Anti-Spam-NotFiltered: toobig
Received: from ppp14-2-47-72.lns21.adl2.internode.on.net (HELO [192.168.1.1])
 ([14.2.47.72])
 by ipmail06.adl6.internode.on.net with ESMTP; 17 Apr 2014 12:19:12 +0930
Message-ID: <534F4124.2080604@HIDDEN>
Date: Thu, 17 Apr 2014 12:19:08 +0930
From: behoffski <behoffski@HIDDEN>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
 rv:24.0) Gecko/20100101 Thunderbird/24.4.0
MIME-Version: 1.0
To: bug-grep@HIDDEN
Subject: "Untangle" script to lay foundations for refactoring dfa.c
Content-Type: multipart/mixed; boundary="------------060207090305090503050301"
X-detected-operating-system: by eggs.gnu.org: Genre and OS details not
 recognized.
X-detected-operating-system: by eggs.gnu.org: Error: Malformed IPv6 address
 (bad octet value).
X-Received-From: 2001:4830:134:3::11
X-Debbugs-Envelope-To: submit
X-BeenThere: debbugs-submit <at> debbugs.gnu.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: <debbugs-submit.debbugs.gnu.org>
List-Unsubscribe: <http://debbugs.gnu.org/cgi-bin/mailman/options/debbugs-submit>, 
 <mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=unsubscribe>
List-Archive: <http://debbugs.gnu.org/cgi-bin/mailman/private/debbugs-submit/>
List-Post: <mailto:debbugs-submit <at> debbugs.gnu.org>
List-Help: <mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=help>
List-Subscribe: <http://debbugs.gnu.org/cgi-bin/mailman/listinfo/debbugs-submit>, 
 <mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=subscribe>
Errors-To: debbugs-submit-bounces <at> debbugs.gnu.org
Sender: "Debbugs-submit" <debbugs-submit-bounces <at> debbugs.gnu.org>

This is a multi-part message in MIME format.
--------------060207090305090503050301
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

G'day,

I decided, back in October, to seriously scratch an itch that I've been
wanting to scratch ever since about 2000 (!).  This is my desire to
include performance enhancements that I discovered while writing Grouse
Grep, which was published in Dr. Dobb's Journal in November 1997, and
which was updated in 2000 to run on Linux machines.

The performance improvements relate to the Boyer-Moore skip search
algorithm, and in particular:
    1. Case-insensitive searches are feasible for unibyte locales;
    2. The self-tuning Boyer-Moore (STBM) search that I invented does
       reduce some pathological cases, and also can speed up the
       normal search cases by a few percent.  It's certainly worth
       trying, but I'm not sure if it will be worthwhile across the
       huge range of use cases that Grep has to cover; and
    3. Case-insensitivity is a special form of classes being able to
       be handled by the B-M skip search.  There may be a few other
       cases where classes could be considered part of a "must-have"
       string, e.g. "fred bloggs[xyz]".  This case (the class at the
       tail of the string) is fairly easy; modifying the match for
       class matching elsewhere may be worthwhile, but the benefits
       dwindle quickly as the number of matching characters goes up,
       and the costs of matching classes comes to dominate.
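
To make point 1 concrete, here is a minimal, self-contained sketch of a
Horspool-style skip search using a case-folded delta table, for unibyte
text only.  This illustrates the idea; it is not the Grouse Grep or GNU
grep implementation, and the function name and interface are invented
for the example:

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Skip search with a case-folded delta table.  Unibyte only; returns a
   pointer to the first match in TEXT, or NULL if there is none.  */
static const char *
ci_skip_search (const char *text, size_t tlen, const char *pat, size_t plen)
{
  size_t delta[256];
  size_t i;

  if (plen == 0 || plen > tlen)
    return NULL;

  /* Default skip: the whole pattern length.  */
  for (i = 0; i < 256; i++)
    delta[i] = plen;

  /* For each pattern byte except the last, record its distance from the
     pattern end, entering both case variants so that either case in the
     text yields the same (safe) skip.  */
  for (i = 0; i + 1 < plen; i++)
    {
      unsigned char c = (unsigned char) pat[i];
      delta[tolower (c)] = plen - 1 - i;
      delta[toupper (c)] = plen - 1 - i;
    }

  /* Align the pattern end at text position i; compare backwards.  */
  for (i = plen - 1; i < tlen; i += delta[(unsigned char) text[i]])
    {
      size_t j = plen, k = i + 1;
      while (j > 0
             && tolower ((unsigned char) text[k - 1])
                == tolower ((unsigned char) pat[j - 1]))
        j--, k--;
      if (j == 0)
        return text + k;
    }
  return NULL;
}
```

The case-folded table costs nothing extra per text character inspected,
which is why case-insensitive skip search remains feasible for unibyte
locales.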

I initially submitted a patch entitled "Patch to speed up GNU Grep,
mostly with -i option" on 6 July 2003, but did not have the necessary
copyright assignments in place, and then personal life intervened and
I was unable to proceed at that time.  Now, however, I do have the
relevant paperwork in place, and so am wanting to try again.

------- Lua script: untangle

Writing in Lua (http://www.lua.org/), which is a small, powerful,
readable, fairly popular, efficient and liberally-licensed scripting
language, I've written a longish script, called "untangle", that takes
source code, mainly dfa.c, but also dfa.h and Makefile.am, and
provides tools to pull apart the code into segments of different
sizes, and then put them back together in different ways, possibly
applying edits in some cases.

The script works with both Lua 5.1 and 5.2, and uses a module from
LuaRocks called "strictness", which flags global variables as errors
unless they are explicitly declared (local variables are always declared
explicitly, so an undeclared global reference is almost always a typo).

This script has waxed and waned over time; its progression has not been
linear, and this shows in some places.  I've tended to write notes to
myself as I go along; sometimes these are in the script comments; at
other times they are in the emitted code.  I apologise in advance for
any offence that comments may give; these should be read as entirely
notes to myself regarding reactions to the code that I'm working on,
and at times this might be because of a half-baked understanding of the
complexities of the problem that the code is attempting to handle.
Refactoring code is always fraught with danger, and these comments were
intended to help me flag areas where more care was needed.

I've tried to conform to GNU and existing coding standards in the code
I've generated; the Lua script, however, mostly reflects my personal
style preferences.  The script is quite long, but this is somewhat
inflated by the use of a patch-like facility to do text substitutions.

The script has some features to check the integrity of the segmentation
effort:  It checks for non-empty lines that have been skipped between
segment selections, ensures that each segment name is unique, and
writes out a "reconstructed" version of each source file, so that any
overlaps or other glitches in the segmentation effort can be caught.

-------- Resulting changes (additions) to grep/src:

The script tries to break dfa.c into smaller, self-contained modules,
with each module's internals strongly hidden from outsiders, so that
changes can be made with less fear of unintended side-effects.

Each module has a "fsa" prefix.  This is because the modules are
search-engine-agnostic; they could be used by either a deterministic
or a non-deterministic matcher, or by a kwset matcher, or by a
Boyer-Moore-Gosper/Sunday matcher, etc.

I've tended to use "?? " (a trigraph lead-in, I know, but *always*
followed by a space) to mark areas where there is some doubt or multiple
options
available at a point in the script or in the code, and I've made a
choice without being very confident that my selection is the best.
These markers could perhaps be read as "REVIEWME: " or "FIXME: ".

The new modules are:

   charclass.[ch]
      Classes are now purely opaque; the client gets a pointer, but does
      not know how the class is implemented.  Functions to map between
      indexes and pointers are provided, so the previous "index" scheme,
      used as an implied parameter to CSET, still works.  However,
      realloc has been abandoned for holding class contents, in favour
      of a "list of pools" approach, so that the pointer for a class
      remains valid for the lifetime of the module.  This, in turn, leads
      to more opportunities for lazy caching, such as in the lexer's
      find_pred function.

      The module continues to aggressively reuse existing classes in
      preference to creating duplicates; a class-state variable, with
      values UNUSED/WORKING/FINALISED, has been added, so that
      de-duplication searches only consider finalised classes.

      More functions have been added: Bits can be set/cleared in ranges,
      and set union and set intersection have been implemented.  Ranges
      are useful to set up utf8 octets without needing to know
      implementation details.  Some of these range operations could be
      optimised (e.g. set k bits via ((1UL << k) - 1), but I'm just not
      in the mood to take this on at present).

      In the future, SIMD operations could be used within the charclass
      module, without any disruption to outsiders.

      A "gutter" has been added to each side of each class.  The main
      reason for this is so that setbit(EOF) and clrbit(EOF), where EOF
      is -1, can be executed harmlessly, without adding overhead to normal
      operations.

      A single class module instance is shared by all users; at present,
      the code is not thread-safe, but a mutex could be used to protect
      areas where race conditions might occur.

   fsatoken.[ch]:
      Defines the lexical token shared by other modules, plus some tools
      to assist debugging (e.g. prtok).

   fsalex.[ch]:
      First receives directives regarding regular expression syntax (e.g.
      whether backreferences are supported, or whether NUL is to be
      included in the "." class), then receives a pattern string to work
      on, and, given this information, supplies a stream of tokens,
      possibly with associated information (e.g. {min,max} parameters),
      to a client (probably fsaparse).

   fsaparse.[ch]:
      Works with a lexer to translate a stream of tokens into a parse
      tree, with the tokens flattened into a linear list, and the tree
      structure imposed by adding postfix tokens to describe relationships
      between preceding lists (trees?) of tokens.

   fsamusts.[ch]:
      Given a postfix-tree token list supplied by fsaparse, find a
      single simple string (if any) that is a must-have item if the
      expression can possibly match, and add any such string to a linked
      list of "musts".

   dfa-prl.c:
      A "parallel" version of dfa.c, with extra code ("HOOK:") added at
      various places, so that the existing code and the new code can be
      run side-by-side, and the outputs compared.  The debug output from
      these hooks is appended to the file /tmp/parallel.log.  This file,
      and these hooks, are mostly for debugging at present; however,
      the ability to hook in a different lexer into the parser may be
      valuable in the future.
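
The class deduplication described for charclass above (reusing an
existing finalised class rather than creating a duplicate) can be
illustrated with a toy re-sketch.  This is not the attached module; all
names here are invented for illustration, and the fixed-size pool stands
in for the real module's growable list of pools:

```c
#include <string.h>

#define TOY_INTS 8              /* enough ints for a 256-bit member set */

typedef struct { unsigned int members[TOY_INTS]; } toyclass_t;

static toyclass_t pool[64];     /* fixed pool; the real module grows a
                                   list of pools so that class pointers
                                   remain valid for the module lifetime */
static int pool_used = 0;

static void
toy_setbit (int b, toyclass_t *ccl)
{
  ccl->members[b / 32] |= 1u << (b % 32);
}

/* Finalise a working class: if an equal finalised class already exists,
   return its index (deduplication); otherwise append and return the new
   index.  */
static int
toy_finalise (const toyclass_t *work)
{
  int i;

  for (i = 0; i < pool_used; i++)
    if (memcmp (pool[i].members, work->members, sizeof work->members) == 0)
      return i;
  pool[pool_used] = *work;
  return pool_used++;
}

/* Demonstrate: two identical classes share one index; a different class
   gets its own.  Returns 1 on success.  */
static int
toy_demo (void)
{
  toyclass_t a = {{0}}, b = {{0}}, c = {{0}};

  toy_setbit ('a', &a);
  toy_setbit ('a', &b);
  toy_setbit ('b', &c);
  return toy_finalise (&a) == toy_finalise (&b)
         && toy_finalise (&c) != toy_finalise (&a);
}
```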

-------- Long names, painful, but hopefully only in the short term

Nearly all externally-visible names in each module have the module name
as a prefix (e.g. fsalex_ or FSALEX_).  This results in very long
names in some places (e.g. FSATOKEN_TK_CSET instead of CSET).  The code
looks ungainly as a result.

The major benefit of this approach is that I can link the old and new
dfa/fsa code side-by-side in a single executable, and try out various
combinations of old code calling new, new code calling old, or even
old code calling both old and new in turn, without linker namespace
clashes.  This feature is used in various ways by dfa-prl.c, and
especially in how information is logged to /tmp/parallel.log.

So, please don't be put off by disruptions to indentation, or the
presence of overly long lines, as a result of this naming style.  The
code is a demonstration/discussion vehicle, and once merits and/or
deficiencies are decided, changes can be made as appropriate.

-------- Locale, re_syntax and other configuration items

At present, there is an assumption that the same locale is present
for the duration of the execution of the dfa machinery.  I've tried to
tighten this, so that the locale present when fsalex_syntax() is called
is the locale to be used, regardless of later global changes.  The code
is not perfect at achieving this; some areas are better than others.
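
A minimal sketch of the kind of locale freezing intended here: capture
locale-derived answers once, at setup time, and consult only the frozen
copy afterwards.  The struct and function names are hypothetical, not
the fsalex API:

```c
#include <langinfo.h>
#include <stdlib.h>
#include <string.h>

/* Locale-derived settings, captured once (at fsalex_syntax() time in
   the intended design); later lexing consults this frozen copy, so a
   global setlocale() call after setup cannot change behaviour.  */
typedef struct
{
  int mb_cur_max;               /* frozen copy of MB_CUR_MAX */
  int using_utf8;               /* frozen "codeset is UTF-8?" answer */
} frozen_locale_t;

static void
freeze_locale (frozen_locale_t *loc)
{
  const char *codeset = nl_langinfo (CODESET);

  loc->mb_cur_max = MB_CUR_MAX;
  loc->using_utf8 = (strcmp (codeset, "UTF-8") == 0);
}

/* Helper for demonstration: freeze and report the captured value.  */
static int
frozen_mb_cur_max (void)
{
  frozen_locale_t loc;

  freeze_locale (&loc);
  return loc.mb_cur_max;
}
```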

-------- Where to next?  Provocation, discussion, timeline?

The special case in the short term that I am aiming at is
case-insensitive Boyer-Moore string searching.  My intended approach
is to recondition dfa.c without major changes, then rework modules by
introducing a more expressive token syntax that allows concepts such as
case-insensitive characters to be named explicitly, then have a fairly
easy and direct task of converting these outputs into formats suitable
for high-performance search algorithms.

Norihiro Tanaka, however, has very neatly devised ways of making an
end-run around the hairier bits of the code, and come out with a
case-insensitive B-M match at the other end.  He's also come out with
a whole raft of other impressive improvements, working in conjunction
with the grep, awk and other GNU component teams.  However, I still
believe that there is some merit in the approach I'm proposing.

I've chosen to be aggressive in some of my changes (e.g. demanding
strict modularity; changing charclass storage so that the type is truly
opaque, and yet the pointer remains valid for the lifetime of the module;
my reworking of FETCH_WC, which was partially made obsolete when
MBS_SUPPORT variants were dropped; the way that I've reworked find_pred,
including using mbrtowc_cache; and rewriting closure ()).  These are
intended to stimulate discussion, and not simply a change for the sake
of a change.

Sometimes, a change in the way information is shared comes up with
neat surprises: Look at how fsalex shares dotclass with fsaparse, and
so neatly handles the single-byte match case of utf8-anychar, without
fsaparse needing to have any detailed knowledge of either the locale
or of selections made by regular expression option bits.

The code is *not* bug-free; the most obvious failure is that the
arrays detailing multibyte character classes are not handed from
fsalex to fsaparse.  Although I haven't checked further, it is likely
that some features that were previously exposed by lex and parse, that
are now hidden, are needed by the higher DFA building machinery; I
haven't investigated this yet.  Also, the code does not try to free up
resources cleanly.  I believe that sharing the code now, and starting
a discussion, is better than trying to polish things further.

I have only tried this code on one machine, running a recent version of
gcc and the GNU toolchain (under Gentoo).  While I've worked on
different CPUs, including different int sizes and different endianness,
I've had less experience with legacy Unix systems, and with the
differences between Unix/Linux/BSD etc.

So, treat this code as a demonstration/discussion starting point, and
over time hopefully risk/benefit analysis can help decide what to do
next.

--------------------

I've attached both the untangle script, and all the files created and/or
modified by it.  I've also attached the "strictness.lua" module,
extracted from LuaRocks.

cheers,

behoffski (Brenton Hoff)
Programmer, Grouse Software


--------------060207090305090503050301
Content-Type: text/plain; charset=us-ascii;
 name="Makefile.am"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="Makefile.am"

## Process this file with automake to create Makefile.in
# Copyright 1997-1998, 2005-2014 Free Software Foundation, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

LN = ln

AM_CFLAGS = $(WARN_CFLAGS) $(WERROR_CFLAGS)

# Tell the linker to omit references to unused shared libraries.
AM_LDFLAGS = $(IGNORE_UNUSED_LIBRARIES_CFLAGS)

bin_PROGRAMS = grep
bin_SCRIPTS = egrep fgrep
grep_SOURCES = grep.c searchutils.c \
          charclass.c fsatoken.c fsalex.c fsaparse.c fsamusts.c dfa-prl.c dfasearch.c \
          kwset.c kwsearch.c \
          pcresearch.c
noinst_HEADERS = grep.h dfa.h kwset.h search.h system.h

# Sometimes, the expansion of $(LIBINTL) includes -lc which may
# include modules defining variables like 'optind', so libgreputils.a
# must precede $(LIBINTL) in order to ensure we use GNU getopt.
# But libgreputils.a must also follow $(LIBINTL), since libintl uses
# replacement functions defined in libgreputils.a.
LDADD = \
  ../lib/libgreputils.a $(LIBINTL) ../lib/libgreputils.a $(LIBICONV) \
  $(LIBTHREAD)

grep_LDADD = $(LDADD) $(LIB_PCRE)
localedir = $(datadir)/locale
AM_CPPFLAGS = -I$(top_builddir)/lib -I$(top_srcdir)/lib

EXTRA_DIST = dosbuf.c egrep.sh

egrep fgrep: egrep.sh Makefile
	$(AM_V_GEN)grep=`echo grep | sed -e '$(transform)'`	&& \
	case $@ in egrep) option=-E;; fgrep) option=-F;; esac	&& \
	sed -e 's|[@]SHELL@|$(SHELL)|g' \
	    -e "s|[@]grep@|$$grep|g" \
	    -e "s|[@]option@|$$option|g" <$(srcdir)/egrep.sh >$@-t
	$(AM_V_at)chmod +x $@-t
	$(AM_V_at)mv $@-t $@

CLEANFILES = egrep fgrep *-t

--------------060207090305090503050301
Content-Type: text/x-csrc;
 name="charclass.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="charclass.c"

/* charclass -- Tools to create and manipulate sets of C "char"s

This module provides tools to create, modify, store and retrieve character
classes, and provides tools tuned to the needs of RE lexical analysers.

The class itself is an opaque type, referenced by a pointer while under
construction, and later by a unique index when finalised.  The module
tries aggressively to reuse existing finalised classes, rather than create
duplicates.  Functions are provided to map between indexes and pointers.
Because of the deduplication effort, the index reported for a class upon
finalisation may map to a different pointer than the one supplied by new ().

Classes may be shared between different lexer instances, although, at the
time of writing (10 April 2014) it is not thread-safe.  In many cases,
there might only be one class under construction at any time, with the
effort either finalised or abandoned quickly.  However, this module
recognises that sometimes multiple classes might be worked on in parallel,
and so explicitly marks each allocated class area as one of "unused",
"work" or "finalised".  This marking is done by an array of state bytes
dynamically allocated when the pool is created.

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h> /* for EOF assert test.  */
#include <string.h>
#include <wchar.h> /* for WEOF assert test.  */
#include "xalloc.h"

/* Lower bound for size of first pool in the list.  */
/* ?? Set to 2 for pool debug;  Use 10 in production?  */
#define POOL_MINIMUM_INITIAL_SIZE          10

#ifndef MAX
# define MAX(a,b) ((a) > (b) ? (a) : (b))
#endif

#ifndef MIN
# define MIN(a,b) ((a) < (b) ? (a) : (b))
#endif

/* We maintain a list-of-pools here, choosing to malloc a new slab of
   memory each time we run out, instead of a realloc strategy.  This is so
   that we can provide a guarantee to the user that any class pointer issued
   remains valid for the lifetime of the module.  */

typedef ptrdiff_t pool_list_index_t;

/* Designator for each charclass in each pool.  Note that enums are ints by
   default, but we use a single unsigned char per class in our explicit
   memory allocation.  */
typedef enum
{
  STATE_UNUSED = 0,
  STATE_WORKING = 1,
  STATE_FINALISED = 2
} charclass_state_t;

typedef struct pool_info_struct {
  charclass_index_t first_index;
  size_t alloc;      /* ?? Use pool_list_index_t type for these?  */
  size_t used;
  charclass_t *classes;

  /* Charclass designator byte array, one per item, allocated dynamically.  */
  unsigned char *class_state;
} pool_t;

static pool_list_index_t pool_list_used  = 0;
static pool_list_index_t pool_list_alloc = 0;
static pool_t *pool_list = NULL;

/* While the header only guarantees a 3-bit gutter at each end of each
   class, we use an entire integer (typically 32 bits) for the gutter,
   with 1 integer placed at the start of each pool, 1 integer as a
   shared gutter between each class, and 1 integer after the last
   class.  This is why there is "(*int) + 1" code after class memory
   allocation calls.  */

/* HPUX defines these as macros in sys/param.h.  */
#ifdef setbit
# undef setbit
#endif
#ifdef clrbit
# undef clrbit
#endif

/* Number of bits in an unsigned char.  */
#ifndef CHARBITS
# define CHARBITS 8
#endif

/* INTBITS need not be exact, just a lower bound.  */
#ifndef INTBITS
# define INTBITS (CHARBITS * sizeof (int))
#endif

/* First integer value that is greater than any character code.  */
#define NOTCHAR (1 << CHARBITS)

/* Number of ints required to hold a bit for every character.  */
#define CHARCLASS_INTS ((NOTCHAR + INTBITS - 1) / INTBITS)

/* Flesh out opaque charclass type given in the header  */
/* The gutter integer following the class member storage also serves as the
   gutter integer before the next class in the list.

   Note that since the "gutter" notion explicitly includes negative values,
   members need to be signed ints, not unsigned ints, so that arithmetic
   shift right can be used (e.g. -8 >> 8 == -1, not -8 / 256 == 0).  */

struct charclass_struct {
   int members[CHARCLASS_INTS];
   int gutter_following;
};

/* Define class bit operations: test, set and clear a bit.

   Grrrr.  I wanted to exploit arithmetic right shift to come up with a
   really cheap and neat way of reducing small negative bit values,
   especially if b == EOF == -1, to an index of -1 that falls neatly
   into the gutter, but strict C conformance does not guarantee this.
   The code below handles the two most likely scenarios, but, as with
   anything that is undefined, this is playing with fire.  */

#if INT_MAX == 32767
# define INT_BITS_LOG2 4        /* log2(sizeof(int)) + log2(CHARBITS) */
#elif INT_MAX == 2147483647
# define INT_BITS_LOG2 5        /* log2(sizeof(int)) + log2(CHARBITS) */
#else
# error "Not implemented: Architectures with ints other than 16 or 32 bits"
#endif

#if ((~0 >> 1) < 0)
  /* Arithmetic shift right: Both signed and unsigned cases are ok.  */
# define ARITH_SHIFT_R_INT(b) ((b) >> INT_BITS_LOG2)
#else
  /* Avoid using right shift if b is negative.  The macro may evaluate b twice
     in some circumstances.  */
# define ARITH_SHIFT_R_INT(b) \
      (((b) < 0) ? -1 : ((b) >> INT_BITS_LOG2))
#endif

bool _GL_ATTRIBUTE_PURE
charclass_tstbit (int b, charclass_t const *ccl)
{
  return ccl->members[ARITH_SHIFT_R_INT(b)] >> b % INTBITS & 1;
}

void
charclass_setbit (int b, charclass_t *ccl)
{
  ccl->members[ARITH_SHIFT_R_INT(b)] |= 1U << b % INTBITS;
}

void
charclass_clrbit (int b, charclass_t *ccl)
{
  ccl->members[ARITH_SHIFT_R_INT(b)] &= ~(1U << b % INTBITS);
}

void
charclass_setbit_range (int start, int end, charclass_t *ccl)

{
  int bit;

  /* Do nothing if the range doesn't make sense.  */
  if (end < start)
    return;
  if (start >= NOTCHAR)
    return;

  /* Clip the range to be in the interval [-1..NOTCHAR - 1] */
  start = MAX(start, -1);
  end   = MAX(end,   -1);
  /* We know start is < NOTCHAR from the test above.  */
  end   = MIN(end,   NOTCHAR - 1);

  /* ?? Could check that ccl is a valid class, but not at present.  */

  /* Okay, loop through the range, bit-by-bit, setting members.  */
  for (bit = start; bit <= end; bit++)
    ccl->members[ARITH_SHIFT_R_INT(bit)] |= 1U << bit % INTBITS;
}

void
charclass_clrbit_range (int start, int end, charclass_t *ccl)

{
  int bit;

  /* Do nothing if the range doesn't make sense.  */
  if (end < start)
    return;
  if (start >= NOTCHAR)
    return;

  /* Clip the range to be in the interval [-1..NOTCHAR - 1] */
  start = MAX(start, -1);
  end   = MAX(end,   -1);
  /* We know start is < NOTCHAR from the test above.  */
  end   = MIN(end,   NOTCHAR - 1);

  /* ?? Could check that ccl is a valid class, but not at present.  */

  /* Okay, loop through the range, bit-by-bit, clearing members.  */
  for (bit = start; bit <= end; bit++)
    ccl->members[ARITH_SHIFT_R_INT(bit)] &= ~(1U << bit % INTBITS);
}

/* Define whole-set operations: Copy, clear, invert, compare and union  */

void
charclass_copyset (charclass_t const *src, charclass_t *dst)
{
  memcpy (dst->members, src->members, sizeof(src->members));
}

void
charclass_zeroset (charclass_t *ccl)
{
  memset (ccl->members, 0, sizeof(ccl->members));
}

void
charclass_notset (charclass_t *ccl)
{
  int i;

  for (i = 0; i < CHARCLASS_INTS; ++i)
    ccl->members[i] = ~ccl->members[i];
}

int _GL_ATTRIBUTE_PURE
charclass_equal (charclass_t const *ccl1, charclass_t const *ccl2)
{
  return memcmp (ccl1->members, ccl2->members,
                 sizeof (ccl1->members)) == 0;
}

void
charclass_unionset (charclass_t const *src, charclass_t *dst)
{
  int i;

  for (i = 0; i < CHARCLASS_INTS; ++i)
    dst->members[i] |= src->members[i];
}

void
charclass_intersectset (charclass_t const *src, charclass_t *dst)
{
  int i;

  for (i = 0; i < CHARCLASS_INTS; ++i)
    dst->members[i] &= src->members[i];
}

/* #ifdef DEBUG */

/* Nybble (4bit)-to-char conversion array for little-bit-endian nybbles.  */
static const char *disp_nybble = "084c2a6e195d3b7f";

/* Return a static string describing a class (Note: not reentrant).  */
char *
charclass_describe (charclass_t const *ccl)
{
  /* The string should probably be less than 90 chars, but overcompensate
     for limited uncertainty introduced by the %p formatting operator.  */
  static char buf[256];
  char *p_buf = buf;
  int i;

  p_buf += sprintf (p_buf, "0x%08lx:", (unsigned long) ccl);
  for (i = 0; i < CHARCLASS_INTS; i += 2)
    {
      int j = ccl->members[i];
      *p_buf++ = ' ';
      *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
      *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
      *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 28) & 0x0f];

      j = ccl->members[i + 1];
      *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
      *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
      *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 28) & 0x0f];
    }
  *p_buf++ = '\0';
  return buf;
}

/* static */ void
debug_pools (const char *label, bool class_contents)
{
  pool_list_index_t pool_nr;
  size_t class_nr;

  printf ("\nPool %p debug(%s): [alloc, used: %ld %ld]\n",
          pool_list, label, pool_list_alloc, pool_list_used);
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      pool_t *pool = &pool_list[pool_nr];
      printf (" %3ld: first_index, alloc, used, classes: %4ld %3lu %3lu %p\n",
              pool_nr, pool->first_index, pool->alloc, pool->used,
              pool->classes);
      printf ("     class_states: ");
      for (class_nr = 0; class_nr < pool->alloc; class_nr++)
        switch (pool->class_state[class_nr])
          {
          case STATE_UNUSED:    putchar ('.'); break;
          case STATE_WORKING:   putchar ('w'); break;
          case STATE_FINALISED: putchar ('F'); break;
          default: printf ("?%02x", pool->class_state[class_nr]);
          }
      putchar ('\n');
    }

  /* If class contents requested, print them out as well.  */
  if (class_contents)
    for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
      {
        pool_t *pool = &pool_list[pool_nr];
        for (class_nr = 0; class_nr < pool->used; class_nr++)
          printf ("%s\n",
                  charclass_describe (&pool->classes[class_nr]));
      }
}

/* #endif * DEBUG */

static pool_t *
add_new_pool (void)
{
  pool_t *prev, *pool;
  size_t pool_class_alloc;
  charclass_t *alloc_mem;

  /* If the pools list is full, use x2nrealloc to expand its size.  */
  if (pool_list_used == pool_list_alloc)
      pool_list = x2nrealloc (pool_list, &pool_list_alloc, sizeof (pool_t));

  /* Find the size of the last charclass pool in the (old) list.  Scale up
     the size so that malloc activity will decrease as the number of pools
     increases.  Also, add 1 here as we knock off 1 to use as a gutter
     later.  */
  prev = &pool_list[pool_list_used - 1];
  pool_class_alloc = (prev->alloc * 5 / 2) + 1;
  alloc_mem = XNMALLOC (pool_class_alloc, charclass_t);

  /* Set up the new pool, shifting the alloc pointer to create the gutter
     preceding the first class of the pool.  */
  pool = &pool_list[pool_list_used++];
  pool->classes = alloc_mem + 1;
  pool->first_index = prev->first_index + prev->alloc;
  pool->alloc = pool_class_alloc - 1;
  pool->used = 0;
  pool->class_state = xzalloc (pool->alloc);

  return pool;
}

charclass_t *
charclass_alloc (void)
{
  pool_list_index_t pool_nr;
  charclass_t *class;
  pool_t *pool = NULL;
  size_t class_nr;
  size_t class_last_nr;
  int *gutter_preceding;

  /* Locate a pool with unused entries (if any).  */
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      pool = &pool_list[pool_nr];

      /* Try to use the earliest pool possible, first by filling in a hole
         left by a withdrawn class, or else by grabbing an unused class at
         the end of the list.  */
      class_last_nr = MIN(pool->used + 1, pool->alloc);
      for (class_nr = 0; class_nr < class_last_nr; class_nr++)
        {
          if (pool->class_state[class_nr] == STATE_UNUSED)
            goto found_pool_and_class;
        }
    }

  /* No space found, so prepare a new pool and make this class its first
     element.  */
  pool = add_new_pool ();
  class_nr = 0;
  /* FALLTHROUGH */

found_pool_and_class:
  /* Mark the found class state as working, zero its elements, and return
     the class pointer to the caller.  Zeroing is needed as this class may
     have been previously worked on, but then abandoned or withdrawn.  */
  pool->class_state[class_nr] = STATE_WORKING;
  if (class_nr >= pool->used)
    pool->used = class_nr + 1;
  class = &pool->classes[class_nr];

  /* Zero out the class' members, and also the gutters on each side.  */
  memset (class, 0, sizeof (*class));
  gutter_preceding = ((int *) class) - 1;
  *gutter_preceding = 0;

  return class;
}

pool_t * _GL_ATTRIBUTE_PURE
find_class_pool (charclass_t const *ccl)
{
  pool_list_index_t pool_nr;
  pool_t *pool = NULL;
  ptrdiff_t class_ptr_offset;

  /* Locate the pool whose memory address space covers this class.  */
  /* ?? Perhaps check &pool->classes[pool->alloc] in this first loop, and
     then check that the index is in the "used" portion later, so we can
     diagnose malformed pointers more exactly.  */
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      pool = &pool_list[pool_nr];
      if ((pool->classes <= ccl) && (ccl < &pool->classes[pool->alloc]))
        goto found_pool;
    }

  /* No credible pool candidate was found.  */
  assert (!"find_class_pool: no pool found");
  return NULL;

found_pool:
  /* Make sure the class clearly lies on an array boundary within the pool's
     memory allocation.  */
  class_ptr_offset = (char *) ccl - (char *) pool->classes;
  if ((class_ptr_offset % sizeof (charclass_t)) != 0)
    {
      /* Pointer does not lie at the start of a pool member.  */
      assert (!"find_class_pool: pointer not aligned.");
      return NULL;
    }

  return pool;
}

static void
withdraw_class (charclass_t *ccl, pool_t *class_pool)
{
  pool_t *pool;
  size_t class_nr;
  int *gutter_preceding;

  /* Use pool reference if given, otherwise work back from the class pointer
     to find the associated pool.  */
  pool = (class_pool != NULL) ? class_pool : find_class_pool (ccl);

  if (pool == NULL)
    assert (!"Could not locate a pool for this charclass");

  /* Zero out the gutters each side of the class.  */
  ccl->gutter_following = 0;
  gutter_preceding = ((int *) ccl) - 1;
  *gutter_preceding = 0;

  /* Work out the class index in the pool.  */
  class_nr = ccl - pool->classes;
  pool->class_state[class_nr] = STATE_UNUSED;

  /* Is this the last item within the pool's class list? */
  if (class_nr == pool->used - 1)
    {
      /* Yes, reduce the pool member count by 1.  */
      pool->used--;
      return;
    }
}

/* Finish off creating a class, and report an index that can be used
   to reference the class.  */
charclass_index_t
charclass_finalise (charclass_t *ccl)
{
  int *gutter_preceding;
  pool_list_index_t pool_nr;
  pool_t *pool;
  charclass_t *found = NULL;
  size_t class_nr;
  pool_t *my_pool = NULL;
  size_t my_class_nr = 0;

  /* Search all pools for a finalised class matching this class, and, if
     found, use it in preference to the new one.  While searching, also
     record where the working class is located.  If we cannot find
     ourselves, the pointer is invalid, and we fail an assertion.  */
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      pool = &pool_list[pool_nr];
      for (class_nr = 0; class_nr < pool->used; class_nr++)
        {
          charclass_t *search = &pool->classes[class_nr];
          /* Have we found ourselves in the list? */
          if (search == ccl)
            {
              /* Yes, remember this place in case no duplicate is found.  */
              my_pool = pool;
              my_class_nr = class_nr;
            }
          if (pool->class_state[class_nr] != STATE_FINALISED)
            continue;
          if (charclass_equal (search, ccl))
            {
              /* Another class, finalised, matches:  Use it in preference to
                 potentially creating a duplicate.  */
              withdraw_class (ccl, my_pool);
              found = search;
              goto found_matching_class;
            }
        }
    }

  /* No duplicate found... but make sure the search pointer is known. */
  assert (my_pool != NULL);
  assert (my_pool->class_state[my_class_nr] == STATE_WORKING);

  /* Prepare to convert the search (work) class into a finalised class.  */
  pool = my_pool;
  class_nr = my_class_nr;
  found = &pool->classes[class_nr];
  /* FALLTHROUGH */

found_matching_class:
  /* Clear out the gutter integers each side of the class entry.  */
  gutter_preceding = found->members - 1;
  *gutter_preceding = 0;
  found->gutter_following = 0;
  pool->class_state[class_nr] = STATE_FINALISED;

  /* Return the index of the class.  */
  return pool->first_index + class_nr;
}

void
charclass_abandon (charclass_t *ccl)
{
  withdraw_class (ccl, NULL);
}

/* Additional functions to help clients work with classes.  */

charclass_t * _GL_ATTRIBUTE_PURE
charclass_get_pointer (charclass_index_t const index)
{
  pool_list_index_t pool_nr;
  pool_t *pool;

  /* Does this class match any class we've seen previously? */
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      /* Is the index inside this pool? */
      pool = &pool_list[pool_nr];
      if (pool->first_index <= index
              && index < (pool->first_index + pool->used))
        {
          /* Yes, find the pointer within the pool and return it.  */
          return &pool->classes[index - pool->first_index];
        }
    }

  /* The mapping above should never fail; we could return NULL, but we
     choose to abort instead.  */
  assert (!"index-to-charclass mapping failed");
  return NULL;
}

charclass_index_t _GL_ATTRIBUTE_PURE
charclass_get_index (charclass_t const *ccl)
{
  pool_t *pool;

  /* This code is similar to charclass_finalise... perhaps merge? */
  pool = find_class_pool (ccl);
  if (pool == NULL)
    return -1;

  /* Report the index to the caller.  */
  return pool->first_index + (ccl - pool->classes);
}

/* Functions to initialise module on startup, and to shut down and
   release acquired resources at exit.  */

void
charclass_initialise (size_t initial_pool_size)
{
  size_t initial_alloc;
  charclass_t *alloc_mem;
  pool_t *pool;
  charclass_t *ccl;
  charclass_index_t zeroclass_index;

  /* Usually EOF = WEOF = -1.  The C standard only requires EOF to be a
     negative integer, and does not require WEOF to be negative at all.
     We test for -1 here as it's a prime target for a "permitted" gutter
     value, and different values might be a problem.  */
  assert (EOF == -1);
  assert (WEOF == -1);

  /* First, set up the list-of-pools structure with initial storage.  */
  pool_list_alloc = 4;
  pool_list = (pool_t *) xnmalloc (pool_list_alloc, sizeof (pool_t));

  /* If initial pool size is small, inflate it here as we prefer to waste
     a little memory, rather than issue many calls to xmalloc ().  This
     minimum also ensures that our double-up pool size strategy has a sane
     starting point.  */
  initial_alloc = MAX(initial_pool_size, POOL_MINIMUM_INITIAL_SIZE);

  /* Set up the first pool using our chosen first alloc size.  Allocate an
     extra class, and offset the pool by this amount, in order to accommodate
     the initial gutter integer.  (Note for the future:  If charclass
     alignment becomes significant, then sizeof (charclass) and this offset
     may need to be changed, perhaps for SIMD instructions.)  */
  pool_list_used = 1;
  pool = &pool_list[0];
  pool->first_index = 0;
  pool->alloc = initial_alloc;
  pool->used = 0;
  alloc_mem = XNMALLOC (pool->alloc + 1, charclass_t);
  pool->classes = alloc_mem + 1;
  pool->class_state = xzalloc (pool->alloc);

  /* Enforce the all-zeroes class to be the first class.  This is needed as
     "abandon" may leave a hole in a pool in some cases, and in these cases
     we need to ensure that no-one else picks it up by accident (as this
     would invalidate the guarantee that the module eliminates all
     duplicates, from the point of view of the user).  So, we set the first
     class to all-zeroes, and also zero out abandoned classes where a hole
     is unavoidable.  */
  ccl = charclass_alloc (); /* Alloc delivers an all-zeroes class.  */
  zeroclass_index = charclass_finalise (ccl);
  assert (zeroclass_index == 0);

/* debug_pools ("add_new_pool: zeroclass added"); */

}

void
charclass_destroy (void)
{
  int i;
  int *alloc_mem;

  /* First, discard the charclass memory associated with each pool,
     including catering for the offset used upon creation.  */
  for (i = 0; i < pool_list_used; i++)
    {
      alloc_mem = (int *) pool_list[i].classes;
      free (alloc_mem - 1);
    }

  /* Second, free up the pool list itself.  */
  free (pool_list);
}

/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-chdr;
 name="charclass.h"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="charclass.h"

/* charclass -- Tools to create and manipulate sets of characters (octets)

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* This module provides services to allocate, manipulate, consolidate and
   discard 256-bit vectors, used to describe 8-bit (octet) sets.  Octet
   is used as the member name here, as "byte" or "char" can sometimes
   refer to different bit sizes (e.g. char -> 6 bits on some IBM/Cyber
   architectures; char -> 32 bits on some DSP architectures; in C,
   sizeof (char) == 1 by definition on all architectures).

   The connection between these "charclass" sets and set expression by
   RE tools can be non-trivial:  Many Unicode characters cannot fit into
   8 bits, and even where octet-based code pages are used, nontrivial
   cases can appear (e.g. Code page 857, MS-DOS Turkish, which has both
   a dotted and a dotless lowercase and uppercase "I").

   On the storage side, things are slightly tricky and perhaps even murky
   at times.  The client starts by allocating a charclass, working on it,
   and then either finalising it (usually) or abandoning it.  The working
   class (pun intended) is represented by a pointer.  If not abandoned,
   this pointer is guaranteed to remain valid for the lifetime of the module.

   The module tries aggressively to eliminate duplicates; this is perhaps the
   main function of the finalise step.  So, the pointer that represents the
   class after finalise may not be the working pointer.

   In addition to the pointer method of referring to a class, the classes
   can be viewed as an array, with the first class receiving index 0, the
   second receiving index 1, and so on.  Functions are provided to map
   pointers to indexes, and vice versa.  The index representation is handy
   as it is very compact (typically much fewer than 24 bits), whereas
   pointers are architecture and OS-specific, and may be 64 bits or more.

   Index 0 is special; it will always represent the zero-class (no members
   set).  Users wanting to store a set of non-zeroclass classes (e.g. utf8)
   can use this property as a sentinel (a value of 0 for a static variable
   can mean "not initialised").

   Finally, there are some "gutter" bits, at least 3 on each end of the
   class, so that, to a limited extent (and especially for the common case
   of EOF == -1), bits can be set and cleared without causing problems,
   and the code does not need to include the overhead of checks for
   out-of-bound bit numbers.  These gutter bits are cleared when the class
   is finalised, so EOF (for instance) should never be a member of a class.  */


#ifndef CHARCLASS_H
#define CHARCLASS_H 1

/* Always import environment-specific configuration items first. */
#include <config.h>

#include <stdbool.h>
#include <stddef.h>

/* Define charclass as an opaque type.  */
typedef struct charclass_struct charclass_t;

/* Indices to valid charclasses are always positive, but -1 can be used
   as a sentinel in some places.  */
typedef ptrdiff_t charclass_index_t;

/* Entire-module initialisation and destruction functions.  The client
   specifies starting size for the class pool.  Destroy releases all
   resources acquired by this module.  */

extern void
charclass_initialise (size_t initial_pool_size);

extern void
charclass_destroy (void);

/* Single-bit operations (test, set, clear).  */

extern bool _GL_ATTRIBUTE_PURE
charclass_tstbit (int b, charclass_t const *ccl);

extern void
charclass_setbit (int b, charclass_t *ccl);

extern void
charclass_clrbit (int b, charclass_t *ccl);

/* Range-of-bits set and clear operations.  These are easier to read, and
   also more efficient, than multiple single-bit calls.  */

extern void
charclass_setbit_range (int start, int end, charclass_t *ccl);

extern void
charclass_clrbit_range (int start, int end, charclass_t *ccl);

/* Whole-of-set operations (copy, zero, invert, compare-equal).  */

extern void
charclass_copyset (charclass_t const *src, charclass_t *dst);

extern void
charclass_zeroset (charclass_t *ccl);

extern void
charclass_notset (charclass_t *ccl);

extern int _GL_ATTRIBUTE_PURE
charclass_equal (charclass_t const *ccl1, charclass_t const *ccl2);

/* Add "unionset" and "intersectset" functions, since whole-of-class
   operations tend to be reasonably expressive and self-documenting.
   In both cases, the source modifies the destination: it is ORed in
   for unionset, and ANDed in for intersectset.  */
extern void
charclass_unionset (charclass_t const *src, charclass_t *dst);

extern void
charclass_intersectset (charclass_t const *src, charclass_t *dst);

/* Functions to allocate, finalise and abandon charclasses.  Note that
   the module aggressively tries to reuse existing finalised classes
   rather than create new ones.  The module returns a unique index
   that can be used to reference the class; this index supersedes the
   pointer used during the work phase (if charclass_get_pointer is
   called, a different class may be returned).

   The aggressive-reuse policy also means that finalised classes must
   not undergo further modification.

   Allocating and then abandoning classes is useful where an operation
   requires temporary classes for a while, but these do not need to be
   maintained once the work is complete.  */

extern charclass_t *
charclass_alloc (void);

extern charclass_index_t
charclass_finalise (charclass_t *ccl);

extern void
charclass_abandon (charclass_t *ccl);

/* Functions to map between pointer references and index references for
   a charclass.  As explained above, the index is convenient as it is
   typically an array reference, and is usually not much larger than the
   number of classes that have been allocated.  */

extern charclass_t * _GL_ATTRIBUTE_PURE
charclass_get_pointer (charclass_index_t const index);

extern charclass_index_t _GL_ATTRIBUTE_PURE
charclass_get_index (charclass_t const *ccl);

/* Return a static string describing a class (Note: not reentrant).  */
extern char *
charclass_describe (charclass_t const *ccl);

#endif /* CHARCLASS_H */

/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-csrc;
 name="dfa-prl.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="dfa-prl.c"

/* dfa.c - deterministic extended regexp routines for GNU
   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

#include <config.h>

#include "dfa.h"

#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
#include <limits.h>
#include <string.h>
#include <locale.h>
#include <stdbool.h>

/* HOOK: Hack in interfaces to new charclass and fsa* modules.  */
#include "charclass.h"
#include "fsatoken.h"
#include "fsalex.h"
#include "fsamusts.h"
#include "fsaparse.h"
#include "proto-lexparse.h"

/* HOOK: File handle for parallel lex/parse debug/log messages */
FILE *pll_log = NULL;

/* HOOK: Static variables to hold opaque parser and lexer contexts.  */
static fsaparse_ctxt_t *parser = NULL;
static fsalex_ctxt_t *lexer = NULL;

static void
HOOK_set_up_fsa_stuff_if_not_done_already (void)
{
  /* If lexer context is present, this function has been run previously.  */
  if (lexer != NULL)
    return;

  /* Create a new lexer instance, and give it error/warning fns  */
  lexer = fsalex_new ();
  fsalex_exception_fns (lexer, dfawarn, dfaerror);

  /* Start with a pool of 10 charclasses.  */
  charclass_initialise (10);

  /* Create a new parser instance, give it error/warning functions,
     and also provide a hook to the lexer.   */
  parser = fsaparse_new ();
  fsaparse_exception_fns (parser, dfawarn, dfaerror);
  fsaparse_lexer (parser, lexer,
                  (proto_lexparse_lex_fn_t *) fsalex_lex,
                  (proto_lexparse_exchange_fn_t *) fsalex_exchange);
}

#define STREQ(a, b) (strcmp (a, b) == 0)

/* ISASCIIDIGIT differs from isdigit, as follows:
   - Its arg may be any int or unsigned int; it need not be an unsigned char.
   - It's guaranteed to evaluate its argument exactly once.
   - It's typically faster.
   Posix 1003.2-1992 section 2.5.2.1 page 50 lines 1556-1558 says that
   only '0' through '9' are digits.  Prefer ISASCIIDIGIT to isdigit unless
   it's important to use the locale's definition of "digit" even when the
   host does not conform to Posix.  */
#define ISASCIIDIGIT(c) ((unsigned) (c) - '0' <= 9)

/* gettext.h ensures that we don't use gettext if ENABLE_NLS is not defined */
#include "gettext.h"
#define _(str) gettext (str)

#include <wchar.h>
#include <wctype.h>

#include "xalloc.h"

/* HPUX defines these as macros in sys/param.h.  */
#ifdef setbit
# undef setbit
#endif
#ifdef clrbit
# undef clrbit
#endif

/* Number of bits in an unsigned char.  */
#ifndef CHARBITS
# define CHARBITS 8
#endif

/* First integer value that is greater than any character code.  */
#define NOTCHAR (1 << CHARBITS)

/* INTBITS need not be exact, just a lower bound.  */
#ifndef INTBITS
# define INTBITS (CHARBITS * sizeof (int))
#endif

/* Number of ints required to hold a bit for every character.  */
#define CHARCLASS_INTS ((NOTCHAR + INTBITS - 1) / INTBITS)

/* Sets of unsigned characters are stored as bit vectors in arrays of ints.  */
typedef unsigned int charclass[CHARCLASS_INTS];

/* Convert a possibly-signed character to an unsigned character.  This is
   a bit safer than casting to unsigned char, since it catches some type
   errors that the cast doesn't.  */
static unsigned char
to_uchar (char ch)
{
  return ch;
}

/* Contexts tell us whether a character is a newline or a word constituent.
   Word-constituent characters are those that satisfy iswalnum, plus '_'.
   Each character has a single CTX_* value; bitmasks of CTX_* values denote
   a particular character class.

   A state also stores a context value, which is a bitmask of CTX_* values.
   A state's context represents a set of characters that the state's
   predecessors must match.  For example, a state whose context does not
   include CTX_LETTER will never have transitions where the previous
   character is a word constituent.  A state whose context is CTX_ANY
   might have transitions from any character.  */

#define CTX_NONE	1
#define CTX_LETTER	2
#define CTX_NEWLINE	4
#define CTX_ANY		7

/* Sometimes characters can only be matched depending on the surrounding
   context.  Such context decisions depend on what the previous character
   was, and the value of the current (lookahead) character.  Context
   dependent constraints are encoded as 8 bit integers.  Each bit that
   is set indicates that the constraint succeeds in the corresponding
   context.

   bits 8-11 - valid contexts when next character is CTX_NEWLINE
   bits 4-7  - valid contexts when next character is CTX_LETTER
   bits 0-3  - valid contexts when next character is CTX_NONE

   The macro SUCCEEDS_IN_CONTEXT determines whether a given constraint
   succeeds in a particular context.  Prev is a bitmask of possible
   context values for the previous character, curr is the (single-bit)
   context value for the lookahead character.  */
#define NEWLINE_CONSTRAINT(constraint) (((constraint) >> 8) & 0xf)
#define LETTER_CONSTRAINT(constraint)  (((constraint) >> 4) & 0xf)
#define OTHER_CONSTRAINT(constraint)    ((constraint)       & 0xf)

#define SUCCEEDS_IN_CONTEXT(constraint, prev, curr) \
  ((((curr) & CTX_NONE      ? OTHER_CONSTRAINT (constraint) : 0) \
    | ((curr) & CTX_LETTER  ? LETTER_CONSTRAINT (constraint) : 0) \
    | ((curr) & CTX_NEWLINE ? NEWLINE_CONSTRAINT (constraint) : 0)) & (prev))

/* The following macros describe what a constraint depends on.  */
#define PREV_NEWLINE_CONSTRAINT(constraint) (((constraint) >> 2) & 0x111)
#define PREV_LETTER_CONSTRAINT(constraint)  (((constraint) >> 1) & 0x111)
#define PREV_OTHER_CONSTRAINT(constraint)    ((constraint)       & 0x111)

#define PREV_NEWLINE_DEPENDENT(constraint) \
  (PREV_NEWLINE_CONSTRAINT (constraint) != PREV_OTHER_CONSTRAINT (constraint))
#define PREV_LETTER_DEPENDENT(constraint) \
  (PREV_LETTER_CONSTRAINT (constraint) != PREV_OTHER_CONSTRAINT (constraint))

/* Tokens that match the empty string subject to some constraint actually
   work by applying that constraint to determine what may follow them,
   taking into account what has gone before.  The following values are
   the constraints corresponding to the special tokens previously defined.  */
#define NO_CONSTRAINT         0x777
#define BEGLINE_CONSTRAINT    0x444
#define ENDLINE_CONSTRAINT    0x700
#define BEGWORD_CONSTRAINT    0x050
#define ENDWORD_CONSTRAINT    0x202
#define LIMWORD_CONSTRAINT    0x252
#define NOTLIMWORD_CONSTRAINT 0x525

/* The regexp is parsed into an array of tokens in postfix form.  Some tokens
   are operators and others are terminal symbols.  Most (but not all) of these
   codes are returned by the lexical analyzer.  */

typedef ptrdiff_t token;

/* Predefined token values.  */
enum
{
  END = -1,                     /* END is a terminal symbol that matches the
                                   end of input; any value of END or less in
                                   the parse tree is such a symbol.  Accepting
                                   states of the DFA are those that would have
                                   a transition on END.  */

  /* Ordinary character values are terminal symbols that match themselves.  */

  EMPTY = NOTCHAR,              /* EMPTY is a terminal symbol that matches
                                   the empty string.  */

  BACKREF,                      /* BACKREF is generated by \<digit>
                                   or by any other construct that
                                   is not completely handled.  If the scanner
                                   detects a transition on backref, it returns
                                   a kind of "semi-success" indicating that
                                   the match will have to be verified with
                                   a backtracking matcher.  */

  BEGLINE,                      /* BEGLINE is a terminal symbol that matches
                                   the empty string if it is at the beginning
                                   of a line.  */

  ENDLINE,                      /* ENDLINE is a terminal symbol that matches
                                   the empty string if it is at the end of
                                   a line.  */

  BEGWORD,                      /* BEGWORD is a terminal symbol that matches
                                   the empty string if it is at the beginning
                                   of a word.  */

  ENDWORD,                      /* ENDWORD is a terminal symbol that matches
                                   the empty string if it is at the end of
                                   a word.  */

  LIMWORD,                      /* LIMWORD is a terminal symbol that matches
                                   the empty string if it is at the beginning
                                   or the end of a word.  */

  NOTLIMWORD,                   /* NOTLIMWORD is a terminal symbol that
                                   matches the empty string if it is not at
                                   the beginning or end of a word.  */

  QMARK,                        /* QMARK is an operator of one argument that
                                   matches zero or one occurrences of its
                                   argument.  */

  STAR,                         /* STAR is an operator of one argument that
                                   matches the Kleene closure (zero or more
                                   occurrences) of its argument.  */

  PLUS,                         /* PLUS is an operator of one argument that
                                   matches the positive closure (one or more
                                   occurrences) of its argument.  */

  REPMN,                        /* REPMN is a lexical token corresponding
                                   to the {m,n} construct.  REPMN never
                                   appears in the compiled token vector.  */

  CAT,                          /* CAT is an operator of two arguments that
                                   matches the concatenation of its
                                   arguments.  CAT is never returned by the
                                   lexical analyzer.  */

  OR,                           /* OR is an operator of two arguments that
                                   matches either of its arguments.  */

  LPAREN,                       /* LPAREN never appears in the parse tree,
                                   it is only a lexeme.  */

  RPAREN,                       /* RPAREN never appears in the parse tree.  */

  ANYCHAR,                      /* ANYCHAR is a terminal symbol that matches
                                   a valid multibyte (or single byte) character.
                                   It is used only if MB_CUR_MAX > 1.  */

  MBCSET,                       /* MBCSET is similar to CSET, but for
                                   multibyte characters.  */

  WCHAR,                        /* Only returned by lex.  wctok contains
                                   the wide character representation.  */

  CSET                          /* CSET (and any value greater) is a
                                   terminal symbol that matches any of a
                                   class of characters.  */
};


/* States of the recognizer correspond to sets of positions in the parse
   tree, together with the constraints under which they may be matched.
   So a position is encoded as an index into the parse tree together with
   a constraint.  */
typedef struct
{
  size_t index;                 /* Index into the parse array.  */
  unsigned int constraint;      /* Constraint for matching this position.  */
} position;

/* Sets of positions are stored as arrays.  */
typedef struct
{
  position *elems;              /* Elements of this position set.  */
  size_t nelem;                 /* Number of elements in this set.  */
  size_t alloc;                 /* Number of elements allocated in ELEMS.  */
} position_set;

/* Sets of leaves are also stored as arrays.  */
typedef struct
{
  size_t *elems;                /* Elements of this leaf set.  */
  size_t nelem;                 /* Number of elements in this set.  */
} leaf_set;

/* A state of the dfa consists of a set of positions, some flags,
   and the token value of the lowest-numbered position of the state that
   contains an END token.  */
typedef struct
{
  size_t hash;                  /* Hash of the positions of this state.  */
  position_set elems;           /* Positions this state could match.  */
  unsigned char context;        /* Context from previous state.  */
  bool has_backref;             /* True if this state matches a \<digit>.  */
  bool has_mbcset;              /* True if this state matches a MBCSET.  */
  unsigned short constraint;    /* Constraint for this state to accept.  */
  token first_end;              /* Token value of the first END in elems.  */
  position_set mbps;            /* Positions which can match multibyte
                                   characters, e.g., period.
                                   Used only if MB_CUR_MAX > 1.  */
} dfa_state;

/* States are indexed by state_num values.  These are normally
   nonnegative but -1 is used as a special value.  */
typedef ptrdiff_t state_num;

/* A bracket operator.
   e.g., [a-c], [[:alpha:]], etc.  */
struct mb_char_classes
{
  ptrdiff_t cset;
  bool invert;
  wchar_t *chars;               /* Normal characters.  */
  size_t nchars;
  wctype_t *ch_classes;         /* Character classes.  */
  size_t nch_classes;
  wchar_t *range_sts;           /* Range characters (start of the range).  */
  wchar_t *range_ends;          /* Range characters (end of the range).  */
  size_t nranges;
  char **equivs;                /* Equivalence classes.  */
  size_t nequivs;
  char **coll_elems;
  size_t ncoll_elems;           /* Collating elements.  */
};

/* A compiled regular expression.  */
struct dfa
{
  /* Fields filled by the scanner.  */
  charclass *charclasses;       /* Array of character sets for CSET tokens.  */
  size_t cindex;                /* Index for adding new charclasses.  */
  size_t calloc;                /* Number of charclasses allocated.  */

  /* Fields filled by the parser.  */
  token *tokens;                /* Postfix parse array.  */
  size_t tindex;                /* Index for adding new tokens.  */
  size_t talloc;                /* Number of tokens currently allocated.  */
  size_t depth;                 /* Depth required of an evaluation stack
                                   used for depth-first traversal of the
                                   parse tree.  */
  size_t nleaves;               /* Number of leaves on the parse tree.  */
  size_t nregexps;              /* Count of parallel regexps being built
                                   with dfaparse.  */
  unsigned int mb_cur_max;      /* Cached value of MB_CUR_MAX.  */
  token utf8_anychar_classes[5]; /* To lower ANYCHAR in UTF-8 locales.  */

  /* The following are used only if MB_CUR_MAX > 1.  */

  /* The value of multibyte_prop[i] is defined by the following rules:

     if tokens[i] < NOTCHAR:
       bit 0: tokens[i] is the first byte of a character, including
              single-byte characters.
       bit 1: tokens[i] is the last byte of a character, including
              single-byte characters.

     if tokens[i] == MBCSET:
       ("the index of mbcsets corresponding to this operator" << 2) + 3

     e.g., for the token sequence 'single_byte_a', 'multi_byte_A',
     'single_byte_b', where multi_byte_A occupies three bytes:
       tokens         = 'sb_a', 'mb_A(1st)', 'mb_A(2nd)', 'mb_A(3rd)', 'sb_b'
       multibyte_prop =  3    ,  1         ,  0         ,  2         ,  3
   */
  size_t nmultibyte_prop;
  int *multibyte_prop;

  /* A table indexed by byte values that contains the corresponding wide
     character (if any) for that byte.  WEOF means the byte is the
     leading byte of a multibyte character.  Invalid and null bytes are
     mapped to themselves.  */
  wint_t mbrtowc_cache[NOTCHAR];

  /* Array of the bracket expression in the DFA.  */
  struct mb_char_classes *mbcsets;
  size_t nmbcsets;
  size_t mbcsets_alloc;

  /* Fields filled by the superset.  */
  struct dfa *superset;             /* Hint of the dfa.  */

  /* Fields filled by the state builder.  */
  dfa_state *states;            /* States of the dfa.  */
  state_num sindex;             /* Index for adding new states.  */
  state_num salloc;             /* Number of states currently allocated.  */

  /* Fields filled by the parse tree->NFA conversion.  */
  position_set *follows;        /* Array of follow sets, indexed by position
                                   index.  The follow of a position is the set
                                   of positions containing characters that
                                   could conceivably follow a character
                                   matching the given position in a string
                                   matching the regexp.  Allocated to the
                                   maximum possible position index.  */
  bool searchflag;              /* True if we are supposed to build a searching
                                   as opposed to an exact matcher.  A searching
                                   matcher finds the first and shortest string
                                   matching a regexp anywhere in the buffer,
                                   whereas an exact matcher finds the longest
                                   string matching, but anchored to the
                                   beginning of the buffer.  */

  /* Fields filled by dfaexec.  */
  state_num tralloc;            /* Number of transition tables that have
                                   slots so far.  */
  int trcount;                  /* Number of transition tables that have
                                   actually been built.  */
  state_num **trans;            /* Transition tables for states that can
                                   never accept.  If the transitions for a
                                   state have not yet been computed, or the
                                   state could possibly accept, its entry in
                                   this table is NULL.  */
  state_num **realtrans;        /* Trans always points to realtrans + 1; this
                                   is so trans[-1] can contain NULL.  */
  state_num **fails;            /* Transition tables after failing to accept
                                   on a state that potentially could do so.  */
  int *success;                 /* Table of acceptance conditions used in
                                   dfaexec and computed in build_state.  */
  state_num *newlines;          /* Transitions on newlines.  The entry for a
                                   newline in any transition table is always
                                   -1 so we can count lines without wasting
                                   too many cycles.  The transition for a
                                   newline is stored separately and handled
                                   as a special case.  Newline is also used
                                   as a sentinel at the end of the buffer.  */
  struct dfamust *musts;        /* List of strings, at least one of which
                                   is known to appear in any r.e. matching
                                   the dfa.  */
  unsigned char *mblen_buf;     /* Corresponds to the input buffer in dfaexec.
                                   Each element stores the number of remaining
                                   bytes of the corresponding multibyte
                                   character in the input string.  An element's
                                   value is 0 if the corresponding character is
                                   single-byte.
                                   e.g., input : 'a', <mb(0)>, <mb(1)>, <mb(2)>
                                   mblen_buf   :  0,       3,       2,       1
                                 */
  size_t nmblen_buf;            /* Allocated size of mblen_buf.  */
  wchar_t *inputwcs;            /* Wide character representation of the input
                                   string in dfaexec.
                                   The length of this array is the same as
                                   the length of the input string (char array).
                                   If input[i] is a single-byte char or the
                                   first byte of a multibyte char, then
                                   inputwcs[i] is its codepoint.  */
  size_t ninputwcs;             /* Allocated number of inputwcs elements.  */
  position_set *mb_follows;     /* Follow set added by ANYCHAR and/or MBCSET
                                   on demand.  */
  int *mb_match_lens;           /* Array of length reduced by ANYCHAR and/or
                                   MBCSET.  */
};

/* Some macros for user access to dfa internals.  */

/* ACCEPTING returns true if s could possibly be an accepting state of r.  */
#define ACCEPTING(s, r) ((r).states[s].constraint)

/* ACCEPTS_IN_CONTEXT returns true if the given state accepts in the
   specified context.  */
#define ACCEPTS_IN_CONTEXT(prev, curr, state, dfa) \
  SUCCEEDS_IN_CONTEXT ((dfa).states[state].constraint, prev, curr)

static void dfamust (struct dfa *dfa);
static void regexp (void);

/* These two macros are identical to the ones in gnulib's xalloc.h,
   except that they do not cast the result to "(t *)", and thus may
   be used via type-free CALLOC and MALLOC macros.  */
#undef XNMALLOC
#undef XCALLOC

/* Allocate memory for N elements of type T, with error checking.  */
/* extern t *XNMALLOC (size_t n, typename t); */
# define XNMALLOC(n, t) \
    (sizeof (t) == 1 ? xmalloc (n) : xnmalloc (n, sizeof (t)))

/* Allocate memory for N elements of type T, with error checking,
   and zero it.  */
/* extern t *XCALLOC (size_t n, typename t); */
# define XCALLOC(n, t) \
    (sizeof (t) == 1 ? xzalloc (n) : xcalloc (n, sizeof (t)))

#define CALLOC(p, n) do { (p) = XCALLOC (n, *(p)); } while (0)
#define MALLOC(p, n) do { (p) = XNMALLOC (n, *(p)); } while (0)
#define REALLOC(p, n) do {(p) = xnrealloc (p, n, sizeof (*(p))); } while (0)

/* Reallocate an array of type *P if N_ALLOC is <= N_REQUIRED.  */
#define REALLOC_IF_NECESSARY(p, n_alloc, n_required)		\
  do								\
    {								\
      if ((n_alloc) <= (n_required))				\
        {							\
          size_t new_n_alloc = (n_required) + !(p);		\
          (p) = x2nrealloc (p, &new_n_alloc, sizeof (*(p)));	\
          (n_alloc) = new_n_alloc;				\
        }							\
    }								\
  while (false)

static void
dfambcache (struct dfa *d)
{
  int i;
  for (i = CHAR_MIN; i <= CHAR_MAX; ++i)
    {
      char c = i;
      unsigned char uc = i;
      mbstate_t s = { 0 };
      wchar_t wc;
      wint_t wi;
      switch (mbrtowc (&wc, &c, 1, &s))
        {
        default: wi = wc; break;
        case (size_t) -2: wi = WEOF; break;
        case (size_t) -1: wi = uc; break;
        }
      d->mbrtowc_cache[uc] = wi;
    }
}

/* Given the dfa D, store into *PWC the result of converting the
   leading bytes of the multibyte buffer S of length N bytes, updating
   the conversion state in *MBS.  On conversion error, convert just a
   single byte as-is.  Return the number of bytes converted.

   This differs from mbrtowc (PWC, S, N, MBS) as follows:

   * Extra arg D, containing an mbrtowc_cache for speed.
   * N must be at least 1.
   * S[N - 1] must be a sentinel byte.
   * Shift encodings are not supported.
   * The return value is always in the range 1..N.
   * *MBS is always valid afterwards.
   * *PWC is always set to something.  */
static size_t
mbs_to_wchar (struct dfa *d, wchar_t *pwc, char const *s, size_t n,
              mbstate_t *mbs)
{
  unsigned char uc = s[0];
  wint_t wc = d->mbrtowc_cache[uc];

  if (wc == WEOF)
    {
      size_t nbytes = mbrtowc (pwc, s, n, mbs);
      if (0 < nbytes && nbytes < (size_t) -2)
        return nbytes;
      memset (mbs, 0, sizeof *mbs);
      wc = uc;
    }

  *pwc = wc;
  return 1;
}

#ifdef DEBUG

static void
prtok (token t)
{
  char const *s;

  if (t < 0)
    fprintf (stderr, "END");
  else if (t < NOTCHAR)
    {
      int ch = t;
      fprintf (stderr, "%c", ch);
    }
  else
    {
      switch (t)
        {
        case EMPTY:
          s = "EMPTY";
          break;
        case BACKREF:
          s = "BACKREF";
          break;
        case BEGLINE:
          s = "BEGLINE";
          break;
        case ENDLINE:
          s = "ENDLINE";
          break;
        case BEGWORD:
          s = "BEGWORD";
          break;
        case ENDWORD:
          s = "ENDWORD";
          break;
        case LIMWORD:
          s = "LIMWORD";
          break;
        case NOTLIMWORD:
          s = "NOTLIMWORD";
          break;
        case QMARK:
          s = "QMARK";
          break;
        case STAR:
          s = "STAR";
          break;
        case PLUS:
          s = "PLUS";
          break;
        case CAT:
          s = "CAT";
          break;
        case OR:
          s = "OR";
          break;
        case LPAREN:
          s = "LPAREN";
          break;
        case RPAREN:
          s = "RPAREN";
          break;
        case ANYCHAR:
          s = "ANYCHAR";
          break;
        case MBCSET:
          s = "MBCSET";
          break;
        default:
          s = "CSET";
          break;
        }
      fprintf (stderr, "%s", s);
    }
}
#endif /* DEBUG */

/* Stuff pertaining to charclasses.  */

static bool
tstbit (unsigned int b, charclass const c)
{
  return c[b / INTBITS] >> b % INTBITS & 1;
}

static void
setbit (unsigned int b, charclass c)
{
  c[b / INTBITS] |= 1U << b % INTBITS;
}

static void
clrbit (unsigned int b, charclass c)
{
  c[b / INTBITS] &= ~(1U << b % INTBITS);
}

static void
copyset (charclass const src, charclass dst)
{
  memcpy (dst, src, sizeof (charclass));
}

static void
zeroset (charclass s)
{
  memset (s, 0, sizeof (charclass));
}

static void
notset (charclass s)
{
  int i;

  for (i = 0; i < CHARCLASS_INTS; ++i)
    s[i] = ~s[i];
}

static bool
equal (charclass const s1, charclass const s2)
{
  return memcmp (s1, s2, sizeof (charclass)) == 0;
}

/* In DFA D, find the index of charclass S, or allocate a new one.  */
static size_t
dfa_charclass_index (struct dfa *d, charclass const s)
{
  size_t i;

  for (i = 0; i < d->cindex; ++i)
    if (equal (s, d->charclasses[i]))
      return i;
  REALLOC_IF_NECESSARY (d->charclasses, d->calloc, d->cindex + 1);
  ++d->cindex;
  copyset (s, d->charclasses[i]);
  return i;
}

/* A pointer to the current dfa is kept here during parsing.  */
static struct dfa *dfa;

/* Find the index of charclass S in the current DFA, or allocate a new one.  */
static size_t
charclass_index (charclass const s)
{
  return dfa_charclass_index (dfa, s);
}

/* Syntax bits controlling the behavior of the lexical analyzer.  */
static reg_syntax_t syntax_bits, syntax_bits_set;

/* Flag for case-folding letters into sets.  */
static bool case_fold;

/* End-of-line byte in data.  */
static unsigned char eolbyte;

/* Cache of char-context values.  */
static int sbit[NOTCHAR];

/* Set of characters considered letters.  */
static charclass letters;

/* Set of characters that are newline.  */
static charclass newline;

/* Add this to the test for whether a byte is word-constituent, since on
   BSD-based systems, many values in the 128..255 range are classified as
   alphabetic, while on glibc-based systems, they are not.  */
#ifdef __GLIBC__
# define is_valid_unibyte_character(c) 1
#else
# define is_valid_unibyte_character(c) (btowc (c) != WEOF)
#endif

/* Return non-zero if C is a "word-constituent" byte; zero otherwise.  */
#define IS_WORD_CONSTITUENT(C) \
  (is_valid_unibyte_character (C) && (isalnum (C) || (C) == '_'))

static int
char_context (unsigned char c)
{
  if (c == eolbyte || c == 0)
    return CTX_NEWLINE;
  if (IS_WORD_CONSTITUENT (c))
    return CTX_LETTER;
  return CTX_NONE;
}

static int
wchar_context (wint_t wc)
{
  if (wc == (wchar_t) eolbyte || wc == 0)
    return CTX_NEWLINE;
  if (wc == L'_' || iswalnum (wc))
    return CTX_LETTER;
  return CTX_NONE;
}

typedef struct regex_name_mapping_struct
{
  reg_syntax_t flag;
  const char *name;
} regex_name_mapping_t;

static regex_name_mapping_t regex_names[] = {
  {RE_BACKSLASH_ESCAPE_IN_LISTS, "backslash_escape_in_lists"},
  {RE_BK_PLUS_QM,                "bk_plus_qm"},
  {RE_CHAR_CLASSES,              "char_classes"},
  {RE_CONTEXT_INDEP_ANCHORS,     "context_indep_anchors"},
  {RE_CONTEXT_INDEP_OPS,         "context_indep_ops"},
  {RE_CONTEXT_INVALID_OPS,       "context_invalid_ops"},
  {RE_DOT_NEWLINE,               "dot_newline"},
  {RE_DOT_NOT_NULL,              "dot_not_null"},
  {RE_HAT_LISTS_NOT_NEWLINE,     "hat_lists_not_newline"},
  {RE_INTERVALS,                 "intervals"},
  {RE_LIMITED_OPS,               "limited_ops"},
  {RE_NEWLINE_ALT,               "newline_alt"},
  {RE_NO_BK_BRACES,              "no_bk_braces"},
  {RE_NO_BK_PARENS,              "no_bk_parens"},
  {RE_NO_BK_REFS,                "no_bk_refs"},
  {RE_NO_BK_VBAR,                "no_bk_vbar"},
  {RE_NO_EMPTY_RANGES,           "no_empty_ranges"},
  {RE_UNMATCHED_RIGHT_PAREN_ORD, "unmatched_right_paren_ord"},
  {RE_NO_POSIX_BACKTRACKING,     "no_posix_backtracking"},
  {RE_NO_GNU_OPS,                "no_gnu_ops"},
  {RE_DEBUG,                     "debug"},
  {RE_INVALID_INTERVAL_ORD,      "invalid_interval_ord"},
  {RE_ICASE,                     "icase"},
  {RE_CARET_ANCHORS_HERE,        "caret_anchors_here"},
  {RE_CONTEXT_INVALID_DUP,       "context_invalid_dup"},
  {RE_NO_SUB,                    "no_sub"},
  {0, NULL}
};

/* Entry point to set syntax options.  */
void
dfasyntax (reg_syntax_t bits, int fold, unsigned char eol)
{
  unsigned int i;

  /* Hook: Debug buffer to record search syntax specifications.  */
  static char buf[256];
  char *p_buf;
  char *locale;

  syntax_bits_set = 1;
  syntax_bits = bits;
  case_fold = fold != 0;
  eolbyte = eol;

  HOOK_set_up_fsa_stuff_if_not_done_already ();

  /* HOOK: Tell fsalex module about syntax selections.  */
  fsalex_syntax (lexer, bits, fold, eol);

  /* HOOK: Record syntax selections in debug logfile.  */
  if (! pll_log)
    pll_log = fopen("/tmp/parallel.log", "a");
  locale = setlocale (LC_ALL, NULL);
  fprintf(pll_log, "\nSyntax: Case fold: %d; eol char: %02x; locale: %s",
          fold, (int) eol, locale);
  p_buf = buf;
  *p_buf++ = '\n';
  *p_buf++ = ' ';
  *p_buf   = '\0';
  for (i = 0; regex_names[i].name; i++)
    {
      char flag_char = (bits & regex_names[i].flag) ? '+' : '-';
      p_buf += sprintf(p_buf, " %c%s", flag_char, regex_names[i].name);
      if (strlen (buf) >= 82)
        {
          fprintf (pll_log, "%s", buf);
          p_buf = &buf[2];
          *p_buf = '\0';
        }
    }
  fprintf(pll_log, "%s\n", buf);

  for (i = 0; i < NOTCHAR; ++i)
    {
      sbit[i] = char_context (i);
      switch (sbit[i])
        {
        case CTX_LETTER:
          setbit (i, letters);
          break;
        case CTX_NEWLINE:
          setbit (i, newline);
          break;
        }
    }
}

/* Set a bit in the charclass for the given wchar_t.  Do nothing if WC
   is represented by a multi-byte sequence.  Even for MB_CUR_MAX == 1,
   this may happen when folding case in weird Turkish locales where
   dotless i/dotted I are not included in the chosen character set.
   Return whether a bit was set in the charclass.  */
static bool
setbit_wc (wint_t wc, charclass c)
{
  int b = wctob (wc);
  if (b == EOF)
    return false;

  setbit (b, c);
  return true;
}

/* Set a bit for B and its case variants in the charclass C.
   MB_CUR_MAX must be 1.  */
static void
setbit_case_fold_c (int b, charclass c)
{
  int ub = toupper (b);
  int i;
  for (i = 0; i < NOTCHAR; i++)
    if (toupper (i) == ub)
      setbit (i, c);
}



/* UTF-8 encoding allows some optimizations that we can't otherwise
   assume in a multibyte encoding.  */
int
using_utf8 (void)
{
  static int utf8 = -1;
  if (utf8 < 0)
    {
      wchar_t wc;
      mbstate_t mbs = { 0 };
      utf8 = mbrtowc (&wc, "\xc4\x80", 2, &mbs) == 2 && wc == 0x100;
    }
  return utf8;
}

/* Return true if the current locale is known to be a unibyte locale
   without multicharacter collating sequences and where range
   comparisons simply use the native encoding.  These locales can be
   processed more efficiently.  */

static bool
using_simple_locale (void)
{
  /* True if the native character set is known to be compatible with
     the C locale.  The following test isn't perfect, but it's good
     enough in practice, as only ASCII and EBCDIC are in common use
     and this test correctly accepts ASCII and rejects EBCDIC.  */
  enum { native_c_charset =
    ('\b' == 8 && '\t' == 9 && '\n' == 10 && '\v' == 11 && '\f' == 12
     && '\r' == 13 && ' ' == 32 && '!' == 33 && '"' == 34 && '#' == 35
     && '%' == 37 && '&' == 38 && '\'' == 39 && '(' == 40 && ')' == 41
     && '*' == 42 && '+' == 43 && ',' == 44 && '-' == 45 && '.' == 46
     && '/' == 47 && '0' == 48 && '9' == 57 && ':' == 58 && ';' == 59
     && '<' == 60 && '=' == 61 && '>' == 62 && '?' == 63 && 'A' == 65
     && 'Z' == 90 && '[' == 91 && '\\' == 92 && ']' == 93 && '^' == 94
     && '_' == 95 && 'a' == 97 && 'z' == 122 && '{' == 123 && '|' == 124
     && '}' == 125 && '~' == 126)
  };

  if (! native_c_charset || MB_CUR_MAX > 1)
    return false;
  else
    {
      static int unibyte_c = -1;
      if (unibyte_c < 0)
        {
          char const *locale = setlocale (LC_ALL, NULL);
          unibyte_c = (!locale
                       || STREQ (locale, "C")
                       || STREQ (locale, "POSIX"));
        }
      return unibyte_c;
    }
}

/* Lexical analyzer.  All the dross that deals with the obnoxious
   GNU Regex syntax bits is located here.  The poor, suffering
   reader is referred to the GNU Regex documentation for the
   meaning of the @#%!@#%^!@ syntax bits.  */

static char const *lexptr;      /* Pointer to next input character.  */
static size_t lexleft;          /* Number of characters remaining.  */
static token lasttok;           /* Previous token returned; initially END.  */
static bool laststart;          /* True if we're separated from beginning or (,
                                   | only by zero-width characters.  */
static size_t parens;           /* Count of outstanding left parens.  */
static int minrep, maxrep;      /* Repeat counts for {m,n}.  */

static int cur_mb_len = 1;      /* Length of the multibyte representation of
                                   wctok.  */
/* These variables are used only if (MB_CUR_MAX > 1).  */
static mbstate_t mbs;           /* mbstate for mbrtowc.  */
static wchar_t wctok;           /* Wide character representation of the current
                                   multibyte character.  */
static unsigned char const *buf_begin;  /* Start of the buffer in dfaexec.  */
static unsigned char const *buf_end;    /* End of the buffer in dfaexec.  */


/* Note that characters become unsigned here.  */
# define FETCH_WC(c, wc, eoferr)		\
  do {						\
    if (! lexleft)				\
      {						\
        if ((eoferr) != 0)			\
          dfaerror (eoferr);			\
        else					\
          return lasttok = END;			\
      }						\
    else					\
      {						\
        wchar_t _wc;				\
        size_t nbytes = mbs_to_wchar (dfa, &_wc, lexptr, lexleft, &mbs); \
        cur_mb_len = nbytes;			\
        (wc) = _wc;				\
        (c) = nbytes == 1 ? to_uchar (*lexptr) : EOF;    \
        lexptr += nbytes;			\
        lexleft -= nbytes;			\
      }						\
  } while (0)

#ifndef MIN
# define MIN(a,b) ((a) < (b) ? (a) : (b))
#endif
#ifndef MAX
# define MAX(a,b) ((a) < (b) ? (b) : (a))
#endif

/* The set of wchar_t values C such that there's a useful locale
   somewhere where C != towupper (C) && C != towlower (towupper (C)).
   For example, 0x00B5 (U+00B5 MICRO SIGN) is in this table, because
   towupper (0x00B5) == 0x039C (U+039C GREEK CAPITAL LETTER MU), and
   towlower (0x039C) == 0x03BC (U+03BC GREEK SMALL LETTER MU).  */
static short const lonesome_lower[] =
  {
    0x00B5, 0x0131, 0x017F, 0x01C5, 0x01C8, 0x01CB, 0x01F2, 0x0345,
    0x03C2, 0x03D0, 0x03D1, 0x03D5, 0x03D6, 0x03F0, 0x03F1,

    /* U+03F2 GREEK LUNATE SIGMA SYMBOL lacks a specific uppercase
       counterpart in locales predating Unicode 4.0.0 (April 2003).  */
    0x03F2,

    0x03F5, 0x1E9B, 0x1FBE,
  };

/* Maximum number of characters that can be the case-folded
   counterparts of a single character, not counting the character
   itself.  This is 1 for towupper, 1 for towlower, and 1 for each
   entry in LONESOME_LOWER.  */
enum
{ CASE_FOLDED_BUFSIZE = 2 + sizeof lonesome_lower / sizeof *lonesome_lower };

/* Find the characters equal to C after case-folding, other than C
   itself, and store them into FOLDED.  Return the number of characters
   stored.  */
static int
case_folded_counterparts (wchar_t c, wchar_t folded[CASE_FOLDED_BUFSIZE])
{
  int i;
  int n = 0;
  wint_t uc = towupper (c);
  wint_t lc = towlower (uc);
  if (uc != c)
    folded[n++] = uc;
  if (lc != uc && lc != c && towupper (lc) == uc)
    folded[n++] = lc;
  for (i = 0; i < sizeof lonesome_lower / sizeof *lonesome_lower; i++)
    {
      wint_t li = lonesome_lower[i];
      if (li != lc && li != uc && li != c && towupper (li) == uc)
        folded[n++] = li;
    }
  return n;
}

typedef int predicate (int);

/* The following list maps the names of the Posix named character classes
   to predicate functions that determine whether a given character is in
   the class.  The leading [ has already been eaten by the lexical
   analyzer.  */
struct dfa_ctype
{
  const char *name;
  predicate *func;
  bool single_byte_only;
};

static const struct dfa_ctype prednames[] = {
  {"alpha", isalpha, false},
  {"upper", isupper, false},
  {"lower", islower, false},
  {"digit", isdigit, true},
  {"xdigit", isxdigit, false},
  {"space", isspace, false},
  {"punct", ispunct, false},
  {"alnum", isalnum, false},
  {"print", isprint, false},
  {"graph", isgraph, false},
  {"cntrl", iscntrl, false},
  {"blank", isblank, false},
  {NULL, NULL, false}
};

static const struct dfa_ctype *_GL_ATTRIBUTE_PURE
find_pred (const char *str)
{
  unsigned int i;
  for (i = 0; prednames[i].name; ++i)
    if (STREQ (str, prednames[i].name))
      break;

  return &prednames[i];
}

/* Multibyte character handling sub-routine for lex.
   Parse a bracket expression and build a struct mb_char_classes.  */
static token
parse_bracket_exp (void)
{
  bool invert;
  int c, c1, c2;
  charclass ccl;

  /* True if this is a bracket expression that dfaexec is known to
     process correctly.  */
  bool known_bracket_exp = true;

  /* Used to warn about [:space:].
     Bit 0 = first character is a colon.
     Bit 1 = last character is a colon.
     Bit 2 = includes any other character but a colon.
     Bit 3 = includes ranges, char/equiv classes or collation elements.  */
  int colon_warning_state;

  wint_t wc;
  wint_t wc2;
  wint_t wc1 = 0;

  /* Work area to build a mb_char_classes.  */
  struct mb_char_classes *work_mbc;
  size_t chars_al, range_sts_al, range_ends_al, ch_classes_al,
    equivs_al, coll_elems_al;

  chars_al = 0;
  range_sts_al = range_ends_al = 0;
  ch_classes_al = equivs_al = coll_elems_al = 0;
  if (MB_CUR_MAX > 1)
    {
      REALLOC_IF_NECESSARY (dfa->mbcsets, dfa->mbcsets_alloc,
                            dfa->nmbcsets + 1);

      /* dfa->multibyte_prop[] holds the index into dfa->mbcsets.
         We update dfa->multibyte_prop[] in addtok, because the index
         into dfa->tokens[] cannot be determined here.  */

      /* Initialize work area.  */
      work_mbc = &(dfa->mbcsets[dfa->nmbcsets++]);
      memset (work_mbc, 0, sizeof *work_mbc);
    }
  else
    work_mbc = NULL;

  memset (ccl, 0, sizeof ccl);
  FETCH_WC (c, wc, _("unbalanced ["));
  if (c == '^')
    {
      FETCH_WC (c, wc, _("unbalanced ["));
      invert = true;
      known_bracket_exp = using_simple_locale ();
    }
  else
    invert = false;

  colon_warning_state = (c == ':');
  do
    {
      c1 = EOF;                 /* Mark c1 as not initialized.  */
      colon_warning_state &= ~2;

      /* Note that if we're looking at some other [:...:] construct,
         we just treat it as a bunch of ordinary characters.  We can do
         this because we assume regex has checked for syntax errors before
         dfa is ever called.  */
      if (c == '[')
        {
#define MAX_BRACKET_STRING_LEN 32
          char str[MAX_BRACKET_STRING_LEN + 1];
          FETCH_WC (c1, wc1, _("unbalanced ["));

          if ((c1 == ':' && (syntax_bits & RE_CHAR_CLASSES))
              || c1 == '.' || c1 == '=')
            {
              size_t len = 0;
              for (;;)
                {
                  FETCH_WC (c, wc, _("unbalanced ["));
                  if ((c == c1 && *lexptr == ']') || lexleft == 0)
                    break;
                  if (len < MAX_BRACKET_STRING_LEN)
                    str[len++] = c;
                  else
                    /* This is in any case an invalid class name.  */
                    str[0] = '\0';
                }
              str[len] = '\0';

              /* Fetch bracket.  */
              FETCH_WC (c, wc, _("unbalanced ["));
              if (c1 == ':')
                /* Build character class.  POSIX allows character
                   classes to match multicharacter collating elements,
                   but the regex code does not support that, so do not
                   worry about that possibility.  */
                {
                  char const *class
                    = (case_fold && (STREQ (str, "upper")
                                     || STREQ (str, "lower")) ? "alpha" : str);
                  const struct dfa_ctype *pred = find_pred (class);
                  if (!pred)
                    dfaerror (_("invalid character class"));

                  if (MB_CUR_MAX > 1 && !pred->single_byte_only)
                    {
                      /* Store the character class as wctype_t.  */
                      wctype_t wt = wctype (class);

                      REALLOC_IF_NECESSARY (work_mbc->ch_classes,
                                            ch_classes_al,
                                            work_mbc->nch_classes + 1);
                      work_mbc->ch_classes[work_mbc->nch_classes++] = wt;
                    }

                  for (c2 = 0; c2 < NOTCHAR; ++c2)
                    if (pred->func (c2))
                      setbit (c2, ccl);
                }
              else
                known_bracket_exp = false;

              colon_warning_state |= 8;

              /* Fetch new lookahead character.  */
              FETCH_WC (c1, wc1, _("unbalanced ["));
              continue;
            }

          /* We treat '[' as a normal character here.  c/c1/wc/wc1
             are already set up.  */
        }

      if (c == '\\' && (syntax_bits & RE_BACKSLASH_ESCAPE_IN_LISTS))
        FETCH_WC (c, wc, _("unbalanced ["));

      if (c1 == EOF)
        FETCH_WC (c1, wc1, _("unbalanced ["));

      if (c1 == '-')
        /* Build range characters.  */
        {
          FETCH_WC (c2, wc2, _("unbalanced ["));

          /* A bracket expression like [a-[.aa.]] matches an unknown set.
             Treat it like [-a[.aa.]] while parsing it, and
             remember that the set is unknown.  */
          if (c2 == '[' && *lexptr == '.')
            {
              known_bracket_exp = false;
              c2 = ']';
            }

          if (c2 != ']')
            {
              if (c2 == '\\' && (syntax_bits & RE_BACKSLASH_ESCAPE_IN_LISTS))
                FETCH_WC (c2, wc2, _("unbalanced ["));

              if (MB_CUR_MAX > 1)
                {
                  /* When case folding map a range, say [m-z] (or even [M-z])
                     to the pair of ranges, [m-z] [M-Z].  Although this code
                     is wrong in multiple ways, it's never used in practice.
                     FIXME: Remove this (and related) unused code.  */
                  REALLOC_IF_NECESSARY (work_mbc->range_sts,
                                        range_sts_al, work_mbc->nranges + 1);
                  REALLOC_IF_NECESSARY (work_mbc->range_ends,
                                        range_ends_al, work_mbc->nranges + 1);
                  work_mbc->range_sts[work_mbc->nranges] =
                    case_fold ? towlower (wc) : (wchar_t) wc;
                  work_mbc->range_ends[work_mbc->nranges++] =
                    case_fold ? towlower (wc2) : (wchar_t) wc2;

                  if (case_fold && (iswalpha (wc) || iswalpha (wc2)))
                    {
                      REALLOC_IF_NECESSARY (work_mbc->range_sts,
                                            range_sts_al, work_mbc->nranges + 1);
                      work_mbc->range_sts[work_mbc->nranges] = towupper (wc);
                      REALLOC_IF_NECESSARY (work_mbc->range_ends,
                                            range_ends_al, work_mbc->nranges + 1);
                      work_mbc->range_ends[work_mbc->nranges++] = towupper (wc2);
                    }
                }
              else if (using_simple_locale ())
                {
                  for (c1 = c; c1 <= c2; c1++)
                    setbit (c1, ccl);
                  if (case_fold)
                    {
                      int uc = toupper (c);
                      int uc2 = toupper (c2);
                      for (c1 = 0; c1 < NOTCHAR; c1++)
                        {
                          int uc1 = toupper (c1);
                          if (uc <= uc1 && uc1 <= uc2)
                            setbit (c1, ccl);
                        }
                    }
                }
              else
                known_bracket_exp = false;

              colon_warning_state |= 8;
              FETCH_WC (c1, wc1, _("unbalanced ["));
              continue;
            }

          /* In the case [x-], the - is an ordinary hyphen,
             which is left in c1, the lookahead character.  */
          lexptr -= cur_mb_len;
          lexleft += cur_mb_len;
        }

      colon_warning_state |= (c == ':') ? 2 : 4;

      if (MB_CUR_MAX == 1)
        {
          if (case_fold)
            setbit_case_fold_c (c, ccl);
          else
            setbit (c, ccl);
          continue;
        }

      if (case_fold)
        {
          wchar_t folded[CASE_FOLDED_BUFSIZE];
          int i, n = case_folded_counterparts (wc, folded);
          REALLOC_IF_NECESSARY (work_mbc->chars, chars_al,
                                work_mbc->nchars + n);
          for (i = 0; i < n; i++)
            if (!setbit_wc (folded[i], ccl))
              work_mbc->chars[work_mbc->nchars++] = folded[i];
        }
      if (!setbit_wc (wc, ccl))
        {
          REALLOC_IF_NECESSARY (work_mbc->chars, chars_al,
                                work_mbc->nchars + 1);
          work_mbc->chars[work_mbc->nchars++] = wc;
        }
    }
  while ((wc = wc1, (c = c1) != ']'));

  if (colon_warning_state == 7)
    dfawarn (_("character class syntax is [[:space:]], not [:space:]"));

  if (! known_bracket_exp)
    return BACKREF;

  if (MB_CUR_MAX > 1)
    {
      static charclass zeroclass;
      work_mbc->invert = invert;
      work_mbc->cset = equal (ccl, zeroclass) ? -1 : charclass_index (ccl);
      return MBCSET;
    }

  if (invert)
    {
      assert (MB_CUR_MAX == 1);
      notset (ccl);
      if (syntax_bits & RE_HAT_LISTS_NOT_NEWLINE)
        clrbit (eolbyte, ccl);
    }

  return CSET + charclass_index (ccl);
}

static token
original_lex (void)
{
  unsigned int c, c2;
  bool backslash = false;
  charclass ccl;
  int i;

  /* Basic plan: We fetch a character.  If it's a backslash,
     we set the backslash flag and go through the loop again.
     On the plus side, this avoids having a duplicate of the
     main switch inside the backslash case.  On the minus side,
     it means that just about every case begins with
     "if (backslash) ...".  */
  for (i = 0; i < 2; ++i)
    {
      FETCH_WC (c, wctok, NULL);
      if (c == (unsigned int) EOF)
        goto normal_char;

      switch (c)
        {
        case '\\':
          if (backslash)
            goto normal_char;
          if (lexleft == 0)
            dfaerror (_("unfinished \\ escape"));
          backslash = true;
          break;

        case '^':
          if (backslash)
            goto normal_char;
          if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS
              || lasttok == END || lasttok == LPAREN || lasttok == OR)
            return lasttok = BEGLINE;
          goto normal_char;

        case '$':
          if (backslash)
            goto normal_char;
          if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS
              || lexleft == 0
              || (syntax_bits & RE_NO_BK_PARENS
                  ? lexleft > 0 && *lexptr == ')'
                  : lexleft > 1 && lexptr[0] == '\\' && lexptr[1] == ')')
              || (syntax_bits & RE_NO_BK_VBAR
                  ? lexleft > 0 && *lexptr == '|'
                  : lexleft > 1 && lexptr[0] == '\\' && lexptr[1] == '|')
              || ((syntax_bits & RE_NEWLINE_ALT)
                  && lexleft > 0 && *lexptr == '\n'))
            return lasttok = ENDLINE;
          goto normal_char;

        case '1':
        case '2':
        case '3':
        case '4':
        case '5':
        case '6':
        case '7':
        case '8':
        case '9':
          if (backslash && !(syntax_bits & RE_NO_BK_REFS))
            {
              laststart = false;
              return lasttok = BACKREF;
            }
          goto normal_char;

        case '`':
          if (backslash && !(syntax_bits & RE_NO_GNU_OPS))
            return lasttok = BEGLINE; /* FIXME: should be beginning of string */
          goto normal_char;

        case '\'':
          if (backslash && !(syntax_bits & RE_NO_GNU_OPS))
            return lasttok = ENDLINE;   /* FIXME: should be end of string */
          goto normal_char;

        case '<':
          if (backslash && !(syntax_bits & RE_NO_GNU_OPS))
            return lasttok = BEGWORD;
          goto normal_char;

        case '>':
          if (backslash && !(syntax_bits & RE_NO_GNU_OPS))
            return lasttok = ENDWORD;
          goto normal_char;

        case 'b':
          if (backslash && !(syntax_bits & RE_NO_GNU_OPS))
            return lasttok = LIMWORD;
          goto normal_char;

        case 'B':
          if (backslash && !(syntax_bits & RE_NO_GNU_OPS))
            return lasttok = NOTLIMWORD;
          goto normal_char;

        case '?':
          if (syntax_bits & RE_LIMITED_OPS)
            goto normal_char;
          if (backslash != ((syntax_bits & RE_BK_PLUS_QM) != 0))
            goto normal_char;
          if (!(syntax_bits & RE_CONTEXT_INDEP_OPS) && laststart)
            goto normal_char;
          return lasttok = QMARK;

        case '*':
          if (backslash)
            goto normal_char;
          if (!(syntax_bits & RE_CONTEXT_INDEP_OPS) && laststart)
            goto normal_char;
          return lasttok = STAR;

        case '+':
          if (syntax_bits & RE_LIMITED_OPS)
            goto normal_char;
          if (backslash != ((syntax_bits & RE_BK_PLUS_QM) != 0))
            goto normal_char;
          if (!(syntax_bits & RE_CONTEXT_INDEP_OPS) && laststart)
            goto normal_char;
          return lasttok = PLUS;

        case '{':
          if (!(syntax_bits & RE_INTERVALS))
            goto normal_char;
          if (backslash != ((syntax_bits & RE_NO_BK_BRACES) == 0))
            goto normal_char;
          if (!(syntax_bits & RE_CONTEXT_INDEP_OPS) && laststart)
            goto normal_char;

          /* Cases:
             {M} - exact count
             {M,} - minimum count, maximum is infinity
             {,N} - 0 through N
             {,} - 0 to infinity (same as '*')
             {M,N} - M through N */
          {
            char const *p = lexptr;
            char const *lim = p + lexleft;
            minrep = maxrep = -1;
            for (; p != lim && ISASCIIDIGIT (*p); p++)
              {
                if (minrep < 0)
                  minrep = *p - '0';
                else
                  minrep = MIN (RE_DUP_MAX + 1, minrep * 10 + *p - '0');
              }
            if (p != lim)
              {
                if (*p != ',')
                  maxrep = minrep;
                else
                  {
                    if (minrep < 0)
                      minrep = 0;
                    while (++p != lim && ISASCIIDIGIT (*p))
                      {
                        if (maxrep < 0)
                          maxrep = *p - '0';
                        else
                          maxrep = MIN (RE_DUP_MAX + 1, maxrep * 10 + *p - '0');
                      }
                  }
              }
            if (! ((! backslash || (p != lim && *p++ == '\\'))
                   && p != lim && *p++ == '}'
                   && 0 <= minrep && (maxrep < 0 || minrep <= maxrep)))
              {
                if (syntax_bits & RE_INVALID_INTERVAL_ORD)
                  goto normal_char;
                dfaerror (_("Invalid content of \\{\\}"));
              }
            if (RE_DUP_MAX < maxrep)
              dfaerror (_("Regular expression too big"));
            lexptr = p;
            lexleft = lim - p;
          }
          laststart = false;
          return lasttok = REPMN;

        case '|':
          if (syntax_bits & RE_LIMITED_OPS)
            goto normal_char;
          if (backslash != ((syntax_bits & RE_NO_BK_VBAR) == 0))
            goto normal_char;
          laststart = true;
          return lasttok = OR;

        case '\n':
          if (syntax_bits & RE_LIMITED_OPS
              || backslash || !(syntax_bits & RE_NEWLINE_ALT))
            goto normal_char;
          laststart = true;
          return lasttok = OR;

        case '(':
          if (backslash != ((syntax_bits & RE_NO_BK_PARENS) == 0))
            goto normal_char;
          ++parens;
          laststart = true;
          return lasttok = LPAREN;

        case ')':
          if (backslash != ((syntax_bits & RE_NO_BK_PARENS) == 0))
            goto normal_char;
          if (parens == 0 && syntax_bits & RE_UNMATCHED_RIGHT_PAREN_ORD)
            goto normal_char;
          --parens;
          laststart = false;
          return lasttok = RPAREN;

        case '.':
          if (backslash)
            goto normal_char;
          if (MB_CUR_MAX > 1)
            {
              /* In a multibyte environment, '.' must match a single
                 character, not a single byte, so use ANYCHAR.  */
              laststart = false;
              return lasttok = ANYCHAR;
            }
          zeroset (ccl);
          notset (ccl);
          if (!(syntax_bits & RE_DOT_NEWLINE))
            clrbit (eolbyte, ccl);
          if (syntax_bits & RE_DOT_NOT_NULL)
            clrbit ('\0', ccl);
          laststart = false;
          return lasttok = CSET + charclass_index (ccl);

        case 's':
        case 'S':
          if (!backslash || (syntax_bits & RE_NO_GNU_OPS))
            goto normal_char;
          if (MB_CUR_MAX == 1)
            {
              zeroset (ccl);
              for (c2 = 0; c2 < NOTCHAR; ++c2)
                if (isspace (c2))
                  setbit (c2, ccl);
              if (c == 'S')
                notset (ccl);
              laststart = false;
              return lasttok = CSET + charclass_index (ccl);
            }

#define PUSH_LEX_STATE(s)			\
  do						\
    {						\
      char const *lexptr_saved = lexptr;	\
      size_t lexleft_saved = lexleft;		\
      lexptr = (s);				\
      lexleft = strlen (lexptr)

#define POP_LEX_STATE()				\
      lexptr = lexptr_saved;			\
      lexleft = lexleft_saved;			\
    }						\
  while (0)

          /* FIXME: see if optimizing this, as is done with ANYCHAR and
             add_utf8_anychar, makes sense.  */

          /* \s and \S are documented to be equivalent to [[:space:]] and
             [^[:space:]] respectively, so tell the lexer to process those
             strings, each minus its "already processed" '['.  */
          PUSH_LEX_STATE (c == 's' ? "[:space:]]" : "^[:space:]]");

          lasttok = parse_bracket_exp ();

          POP_LEX_STATE ();

          laststart = false;
          return lasttok;

        case 'w':
        case 'W':
          if (!backslash || (syntax_bits & RE_NO_GNU_OPS))
            goto normal_char;
          zeroset (ccl);
          for (c2 = 0; c2 < NOTCHAR; ++c2)
            if (IS_WORD_CONSTITUENT (c2))
              setbit (c2, ccl);
          if (c == 'W')
            notset (ccl);
          laststart = false;
          return lasttok = CSET + charclass_index (ccl);

        case '[':
          if (backslash)
            goto normal_char;
          laststart = false;
          return lasttok = parse_bracket_exp ();

        default:
        normal_char:
          laststart = false;
          /* For multibyte character sets, folding is done in atom.  Always
             return WCHAR.  */
          if (MB_CUR_MAX > 1)
            return lasttok = WCHAR;

          if (case_fold && isalpha (c))
            {
              zeroset (ccl);
              setbit_case_fold_c (c, ccl);
              return lasttok = CSET + charclass_index (ccl);
            }

          return lasttok = c;
        }
    }

  /* The above loop should consume at most a backslash
     and some other character.  */
  abort ();
  return END;                   /* keeps pedantic compilers happy.  */
}
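The `{M,N}` digit scan in the REPMN case above can be exercised stand-alone. The sketch below mirrors the scan and its clamping at one past the duplication limit (so overflow is still caught by the caller's range check), omitting the backslash-delimited BRE form; `DUP_MAX`, `parse_interval`, `min_of` and `max_of` are illustrative names, not dfa.c identifiers.

```c
#include <assert.h>
#include <string.h>

enum { DUP_MAX = 255 };         /* Stand-in for RE_DUP_MAX.  */

/* Parse "M}", "M,}", ",N}", ",}" or "M,N}" starting just past the '{'.
   Return nonzero on success, with *MINREP/*MAXREP set; *MAXREP < 0
   means "no upper bound".  Counts are clamped at DUP_MAX + 1.  */
static int
parse_interval (char const *p, char const *lim, int *minrep, int *maxrep)
{
  *minrep = *maxrep = -1;
  for (; p != lim && '0' <= *p && *p <= '9'; p++)
    {
      int d = *p - '0';
      *minrep = (*minrep < 0 ? d
                 : *minrep * 10 + d <= DUP_MAX ? *minrep * 10 + d
                                               : DUP_MAX + 1);
    }
  if (p != lim && *p == ',')
    {
      if (*minrep < 0)
        *minrep = 0;            /* "{,N}" means "{0,N}".  */
      while (++p != lim && '0' <= *p && *p <= '9')
        {
          int d = *p - '0';
          *maxrep = (*maxrep < 0 ? d
                     : *maxrep * 10 + d <= DUP_MAX ? *maxrep * 10 + d
                                                   : DUP_MAX + 1);
        }
    }
  else
    *maxrep = *minrep;          /* "{M}" means exactly M.  */

  /* Valid only if terminated by '}' with a sane ordering.  */
  return (p != lim && *p == '}'
          && 0 <= *minrep && (*maxrep < 0 || *minrep <= *maxrep));
}

/* Small accessors so the behavior is easy to check; -2 means invalid.  */
static int
min_of (char const *s)
{ int mn, mx; return parse_interval (s, s + strlen (s), &mn, &mx) ? mn : -2; }
static int
max_of (char const *s)
{ int mn, mx; return parse_interval (s, s + strlen (s), &mn, &mx) ? mx : -2; }
```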

static token
lex (void)
{
  token            original_token;
  fsatoken_token_t fsalex_token;

  original_token = original_lex ();
  fsalex_token   = fsalex_lex (lexer);

  fprintf (pll_log, "Token debug: Original, fsalex: %08lx %08lx\n",
           (unsigned long) original_token, (unsigned long) fsalex_token);

  if (fsalex_token == FSATOKEN_TK_REPMN)
    {
      int x_minrep, x_maxrep;
      x_minrep = fsalex_exchange(lexer,
                                 PROTO_LEXPARSE_OP_GET_REPMN_MIN, NULL);
      x_maxrep = fsalex_exchange(lexer,
                                 PROTO_LEXPARSE_OP_GET_REPMN_MAX, NULL);
      fprintf (pll_log, "       Original REPMN{%d,%d};  ", minrep, maxrep);
      fprintf (pll_log, "  FSATOKEN_TK_REPMN{%d,%d}\n", x_minrep, x_maxrep);
    }

  else if (fsalex_token >= FSATOKEN_TK_CSET)
    {
      size_t index;
      unsigned int * orig_ccl;
      int i;
      charclass_t *charset;
      char *description;
      static char buf[256];
      char *p_buf;

      /* Nybble (4-bit)-to-char conversion array for little-bit-endian
         nybbles.  */
      static const char *disp_nybble = "084c2a6e195d3b7f";

      /* Report details of the original charclass produced by dfa.c.  */
      index = original_token - CSET;
      p_buf = buf;
      orig_ccl = dfa->charclasses[index];
      for (i = 0; i < CHARCLASS_INTS; i += 2)
        {
          int j = orig_ccl[i];
          *p_buf++ = ' ';
          *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 28) & 0x0f];

          j = orig_ccl[i + 1];
          *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 28) & 0x0f];
        }
      *p_buf++ = '\0';
      fprintf (pll_log, "              original [%3lu]:%s\n",
               (unsigned long) index, buf);

      /* Also report the charclass member details from fsalex etc.  */
      index = fsalex_token - FSATOKEN_TK_CSET;
      charset = charclass_get_pointer (index);
      description = charclass_describe (charset);
      index = charclass_get_index (charset);
      fprintf (pll_log, "    fsalex: [%3lu] %s\n",
               (unsigned long) index, description);
    }

  return original_token;
}
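The hex dump in lex above prints each charclass word lowest bit first: disp_nybble maps a nybble value to the hex digit of its bit-reversed value, so bit 0 of a word appears as the leftmost digit. A minimal sketch of that mapping (`dump_word` is an illustrative name, not a dfa.c function):

```c
#include <assert.h>
#include <string.h>

/* Nybble value -> hex digit of the bit-reversed value.  */
static const char *disp_nybble = "084c2a6e195d3b7f";

/* Render one 32-bit charclass word as 8 hex digits, lowest bit first,
   into OUT (at least 9 bytes).  */
static void
dump_word (unsigned int j, char *out)
{
  int shift;
  for (shift = 0; shift < 32; shift += 4)
    *out++ = disp_nybble[(j >> shift) & 0x0f];
  *out = '\0';
}
```

With this display, setting bit 0 (character code 0 of the word) shows up as a leading '8', and the top bit as a trailing '1'.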

static void
show_musts (const char *title, fsamusts_list_element_t *list)
{
  fsamusts_list_element_t *elem;
  static char buf[256];
  char *p_buf;

  fprintf (pll_log, "\n%s:\n", title);

  p_buf = buf;
  *p_buf = '\0';
  for (elem = list; elem != NULL; elem = elem->next)
    {
      /* Flush the line if appending this entry would overflow column 72.  */
      if (((p_buf - buf) + 4 + strlen (elem->must)) > 72)
        {
          fprintf (pll_log, " %s\n", buf);
          p_buf = buf;
        }
      p_buf += sprintf (p_buf, " (%s) >%s<",
                        elem->exact ? "Entire" : "partial",
                        elem->must);
    }
  fprintf (pll_log, "%s\n", buf);
}

/* Recursive descent parser for regular expressions.  */

static token tok;               /* Lookahead token.  */
static size_t depth;            /* Current depth of a hypothetical stack
                                   holding deferred productions.  This is
                                   used to determine the depth that will be
                                   required of the real stack later on in
                                   dfaanalyze.  */

static void
addtok_mb (token t, int mbprop)
{
  if (MB_CUR_MAX > 1)
    {
      REALLOC_IF_NECESSARY (dfa->multibyte_prop, dfa->nmultibyte_prop,
                            dfa->tindex + 1);
      dfa->multibyte_prop[dfa->tindex] = mbprop;
    }

  REALLOC_IF_NECESSARY (dfa->tokens, dfa->talloc, dfa->tindex + 1);
  dfa->tokens[dfa->tindex++] = t;

  switch (t)
    {
    case QMARK:
    case STAR:
    case PLUS:
      break;

    case CAT:
    case OR:
      --depth;
      break;

    default:
      ++dfa->nleaves;
    case EMPTY:
      ++depth;
      break;
    }
  if (depth > dfa->depth)
    dfa->depth = depth;
}

static void addtok_wc (wint_t wc);

/* Add the given token to the parse tree, maintaining the depth count and
   updating the maximum depth if necessary.  */
static void
addtok (token t)
{
  if (MB_CUR_MAX > 1 && t == MBCSET)
    {
      bool need_or = false;
      struct mb_char_classes *work_mbc = &dfa->mbcsets[dfa->nmbcsets - 1];

      /* Extract wide characters into alternations for better performance.
         This does not require UTF-8.  */
      if (!work_mbc->invert)
        {
          size_t i;
          for (i = 0; i < work_mbc->nchars; i++)
            {
              addtok_wc (work_mbc->chars[i]);
              if (need_or)
                addtok (OR);
              need_or = true;
            }
          work_mbc->nchars = 0;
        }

      /* If the MBCSET is non-inverted and contains no character classes
         involving multibyte characters, no range expressions, no
         equivalence classes, and no collating elements, it can be
         replaced with a simple CSET.  */
      if (work_mbc->invert
          || work_mbc->nch_classes != 0
          || work_mbc->nranges != 0
          || work_mbc->nequivs != 0 || work_mbc->ncoll_elems != 0)
        {
          addtok_mb (MBCSET, ((dfa->nmbcsets - 1) << 2) + 3);
          if (need_or)
            addtok (OR);
        }
      else
        {
          /* Characters have been handled above, so it is possible
             that the mbcset is empty now.  Do nothing in that case.  */
          if (work_mbc->cset != -1)
            {
              addtok (CSET + work_mbc->cset);
              if (need_or)
                addtok (OR);
            }
        }
    }
  else
    {
      addtok_mb (t, 3);
    }
}

/* We treat a multibyte character as a single atom, so that the DFA
   can treat it as a single expression.

   e.g., we construct the following tree from "<mb1><mb2>".
   <mb1(1st-byte)><mb1(2nd-byte)><CAT><mb1(3rd-byte)><CAT>
   <mb2(1st-byte)><mb2(2nd-byte)><CAT><mb2(3rd-byte)><CAT><CAT> */
static void
addtok_wc (wint_t wc)
{
  unsigned char buf[MB_LEN_MAX];
  mbstate_t s = { 0 };
  int i;
  size_t stored_bytes = wcrtomb ((char *) buf, wc, &s);

  if (stored_bytes != (size_t) -1)
    cur_mb_len = stored_bytes;
  else
    {
      /* This is merely a stop-gap.  buf[0] is undefined, yet skipping
         the addtok_mb call altogether can corrupt the heap.  */
      cur_mb_len = 1;
      buf[0] = 0;
    }

  addtok_mb (buf[0], cur_mb_len == 1 ? 3 : 1);
  for (i = 1; i < cur_mb_len; i++)
    {
      addtok_mb (buf[i], i == cur_mb_len - 1 ? 2 : 0);
      addtok (CAT);
    }
}

static void
add_utf8_anychar (void)
{
  static const charclass utf8_classes[5] = {
    {0, 0, 0, 0, ~0, ~0, 0, 0},		/* 80-bf: non-leading bytes */
    {~0, ~0, ~0, ~0, 0, 0, 0, 0},       /* 00-7f: 1-byte sequence */
    {0, 0, 0, 0, 0, 0, ~3, 0},          /* c2-df: 2-byte sequence */
    {0, 0, 0, 0, 0, 0, 0, 0xffff},      /* e0-ef: 3-byte sequence */
    {0, 0, 0, 0, 0, 0, 0, 0xff0000}     /* f0-f7: 4-byte sequence */
  };
  const unsigned int n = sizeof (utf8_classes) / sizeof (utf8_classes[0]);
  unsigned int i;

  /* Define the five character classes that are needed below.  */
  if (dfa->utf8_anychar_classes[0] == 0)
    for (i = 0; i < n; i++)
      {
        charclass c;
        copyset (utf8_classes[i], c);
        if (i == 1)
          {
            if (!(syntax_bits & RE_DOT_NEWLINE))
              clrbit (eolbyte, c);
            if (syntax_bits & RE_DOT_NOT_NULL)
              clrbit ('\0', c);
          }
        dfa->utf8_anychar_classes[i] = CSET + charclass_index (c);
      }

  /* A valid UTF-8 character is

     ([0x00-0x7f]
     |[0xc2-0xdf][0x80-0xbf]
     |[0xe0-0xef][0x80-0xbf][0x80-0xbf]
     |[0xf0-0xf7][0x80-0xbf][0x80-0xbf][0x80-0xbf])

     which I'll write more concisely "B|CA|DAA|EAAA".  Factor the
     [0x80-0xbf] and you get "B|(C|(D|EA)A)A".  And since the token
     buffer is in reverse Polish notation, you get
     "B C D E A CAT OR A CAT OR A CAT OR".  */
  for (i = 1; i < n; i++)
    addtok (dfa->utf8_anychar_classes[i]);
  while (--i > 1)
    {
      addtok (dfa->utf8_anychar_classes[0]);
      addtok (CAT);
      addtok (OR);
    }
}
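The postfix order described in the comment above can be made concrete by running the same two loops over one-letter stand-ins for the five classes: 'A' for the continuation-byte class, 'B'..'E' for the 1- to 4-byte leading classes, '.' for CAT and '|' for OR (`emit_rpn` and this encoding are illustrative only):

```c
#include <assert.h>
#include <string.h>

/* Mirror add_utf8_anychar's emission loops, writing one character per
   token into OUT (at least 14 bytes); return the token count.  */
static size_t
emit_rpn (char *out)
{
  static const char classes[5] = { 'A', 'B', 'C', 'D', 'E' };
  size_t n = 0;
  unsigned int i;

  /* First push B C D E ...  */
  for (i = 1; i < 5; i++)
    out[n++] = classes[i];
  /* ... then close with "A CAT OR" three times, the reverse Polish
     form of B|(C|(D|EA)A)A.  */
  while (--i > 1)
    {
      out[n++] = classes[0];
      out[n++] = '.';
      out[n++] = '|';
    }
  out[n] = '\0';
  return n;
}
```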

/* The grammar understood by the parser is as follows.

   regexp:
     regexp OR branch
     branch

   branch:
     branch closure
     closure

   closure:
     closure QMARK
     closure STAR
     closure PLUS
     closure REPMN
     atom

   atom:
     <normal character>
     <multibyte character>
     ANYCHAR
     MBCSET
     CSET
     BACKREF
     BEGLINE
     ENDLINE
     BEGWORD
     ENDWORD
     LIMWORD
     NOTLIMWORD
     LPAREN regexp RPAREN
     <empty>

   The parser builds a parse tree in postfix form in an array of tokens.  */

static void
atom (void)
{
  if (tok == WCHAR)
    {
      addtok_wc (wctok);

      if (case_fold)
        {
          wchar_t folded[CASE_FOLDED_BUFSIZE];
          int i, n = case_folded_counterparts (wctok, folded);
          for (i = 0; i < n; i++)
            {
              addtok_wc (folded[i]);
              addtok (OR);
            }
        }

      tok = lex ();
    }
  else if (tok == ANYCHAR && using_utf8 ())
    {
      /* For UTF-8 expand the period to a series of CSETs that define a valid
         UTF-8 character.  This avoids using the slow multibyte path.  I'm
         pretty sure it would be both profitable and correct to do it for
         any encoding; however, the optimization must be done manually as
         it is done above in add_utf8_anychar.  So, let's start with
         UTF-8: it is the most used, and the structure of the encoding
         makes the correctness more obvious.  */
      add_utf8_anychar ();
      tok = lex ();
    }
  else if ((tok >= 0 && tok < NOTCHAR) || tok >= CSET || tok == BACKREF
           || tok == BEGLINE || tok == ENDLINE || tok == BEGWORD
           || tok == ANYCHAR || tok == MBCSET
           || tok == ENDWORD || tok == LIMWORD || tok == NOTLIMWORD)
    {
      addtok (tok);
      tok = lex ();
    }
  else if (tok == LPAREN)
    {
      tok = lex ();
      regexp ();
      if (tok != RPAREN)
        dfaerror (_("unbalanced ("));
      tok = lex ();
    }
  else
    addtok (EMPTY);
}

/* Return the number of tokens in the given subexpression.  */
static size_t _GL_ATTRIBUTE_PURE
nsubtoks (size_t tindex)
{
  size_t ntoks1;

  switch (dfa->tokens[tindex - 1])
    {
    default:
      return 1;
    case QMARK:
    case STAR:
    case PLUS:
      return 1 + nsubtoks (tindex - 1);
    case CAT:
    case OR:
      ntoks1 = nsubtoks (tindex - 1);
      return 1 + ntoks1 + nsubtoks (tindex - 1 - ntoks1);
    }
}
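nsubtoks works because the token array is in postfix form: walking back from the end, a unary operator consumes one subexpression and CAT/OR consume two. A stand-alone sketch under assumed token codes (`TCAT`, `TOR`, `TSTAR`, `subtoks` and the demo array are illustrative, not dfa.c names):

```c
#include <assert.h>
#include <stddef.h>

enum { TCAT = 256, TOR, TSTAR };   /* Illustrative operator codes.  */

/* Return the number of tokens in the subexpression ending at
   TOKENS[TINDEX - 1], exactly as nsubtoks does.  */
static size_t
subtoks (int const *tokens, size_t tindex)
{
  size_t ntoks1;

  switch (tokens[tindex - 1])
    {
    default:
      return 1;
    case TSTAR:
      return 1 + subtoks (tokens, tindex - 1);
    case TCAT:
    case TOR:
      ntoks1 = subtoks (tokens, tindex - 1);
      return 1 + ntoks1 + subtoks (tokens, tindex - 1 - ntoks1);
    }
}

/* "(ab)|c*" in postfix: a b CAT c STAR OR.  */
static const int demo[] = { 'a', 'b', TCAT, 'c', TSTAR, TOR };
```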

/* Copy the given subexpression to the top of the tree.  */
static void
copytoks (size_t tindex, size_t ntokens)
{
  size_t i;

  if (MB_CUR_MAX > 1)
    for (i = 0; i < ntokens; ++i)
      addtok_mb (dfa->tokens[tindex + i], dfa->multibyte_prop[tindex + i]);
  else
    for (i = 0; i < ntokens; ++i)
      addtok_mb (dfa->tokens[tindex + i], 3);
}

static void
closure (void)
{
  int i;
  size_t tindex, ntokens;

  atom ();
  while (tok == QMARK || tok == STAR || tok == PLUS || tok == REPMN)
    if (tok == REPMN && (minrep || maxrep))
      {
        ntokens = nsubtoks (dfa->tindex);
        tindex = dfa->tindex - ntokens;
        if (maxrep < 0)
          addtok (PLUS);
        if (minrep == 0)
          addtok (QMARK);
        for (i = 1; i < minrep; ++i)
          {
            copytoks (tindex, ntokens);
            addtok (CAT);
          }
        for (; i < maxrep; ++i)
          {
            copytoks (tindex, ntokens);
            addtok (QMARK);
            addtok (CAT);
          }
        tok = lex ();
      }
    else if (tok == REPMN)
      {
        dfa->tindex -= nsubtoks (dfa->tindex);
        tok = lex ();
        closure ();
      }
    else
      {
        addtok (tok);
        tok = lex ();
      }
}

static void
branch (void)
{
  closure ();
  while (tok != RPAREN && tok != OR && tok >= 0)
    {
      closure ();
      addtok (CAT);
    }
}

static void
regexp (void)
{
  branch ();
  while (tok == OR)
    {
      tok = lex ();
      branch ();
      addtok (OR);
    }
}

/* Main entry point for the parser.  S is a string to be parsed and LEN is
   its length; because the length is passed explicitly, S may contain NUL
   bytes.  D is a pointer to the struct dfa to parse into.  */
void
dfaparse (char const *s, size_t len, struct dfa *d)
{
  size_t i;
  dfa = d;
  lexptr = s;
  lexleft = len;

  HOOK_set_up_fsa_stuff_if_not_done_already ();

  /* HOOK: Tell fsalex about this pattern.  */
  fsalex_pattern (lexer, s, len);

  /* HOOK: Log debug messages privately so regression tests can be tried.  */
  if (! pll_log)
    pll_log = fopen("/tmp/parallel.log", "a");
  fprintf (pll_log, "Pattern:");
  for (i = 0; i < len; i++)
    fprintf (pll_log, "  %c", isprint ((unsigned char) s[i]) ? s[i] : ' ');
  fprintf (pll_log, "\n        ");
  for (i = 0; i < len; i++)
    fprintf (pll_log, " %02x", ((unsigned) s[i]) & 0xff);
  fprintf (pll_log, "\n");

  lasttok = END;
  laststart = true;
  parens = 0;
  if (MB_CUR_MAX > 1)
    {
      cur_mb_len = 0;
      memset (&mbs, 0, sizeof mbs);
    }

  if (!syntax_bits_set)
    dfaerror (_("no syntax specified"));

  tok = lex ();
  depth = d->depth;

  regexp ();

  if (tok != END)
    dfaerror (_("unbalanced )"));

  addtok (END - d->nregexps);
  addtok (CAT);

  if (d->nregexps)
    addtok (OR);

  ++d->nregexps;
}

/* Some primitives for operating on sets of positions.  */

/* Copy one set to another; the destination must be large enough.  */
static void
copy (position_set const *src, position_set * dst)
{
  REALLOC_IF_NECESSARY (dst->elems, dst->alloc, src->nelem);
  memcpy (dst->elems, src->elems, sizeof (dst->elems[0]) * src->nelem);
  dst->nelem = src->nelem;
}

static void
alloc_position_set (position_set * s, size_t size)
{
  MALLOC (s->elems, size);
  s->alloc = size;
  s->nelem = 0;
}

/* Insert position P in set S.  S is maintained in sorted order on
   decreasing index.  If there is already an entry in S with P.index
   then merge (logically-OR) P's constraints into the one in S.
   S->elems must point to an array large enough to hold the resulting set.  */
static void
insert (position p, position_set * s)
{
  size_t count = s->nelem;
  size_t lo = 0, hi = count;
  size_t i;
  while (lo < hi)
    {
      size_t mid = (lo + hi) >> 1;
      if (s->elems[mid].index > p.index)
        lo = mid + 1;
      else
        hi = mid;
    }

  if (lo < count && p.index == s->elems[lo].index)
    {
      s->elems[lo].constraint |= p.constraint;
      return;
    }

  REALLOC_IF_NECESSARY (s->elems, s->alloc, count + 1);
  for (i = count; i > lo; i--)
    s->elems[i] = s->elems[i - 1];
  s->elems[lo] = p;
  ++s->nelem;
}
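insert's binary search over a descending-sorted array, with OR-merging of duplicate indexes, can be sketched stand-alone (`struct mpos`, `set_insert` and `demo_run` are illustrative names; as with insert, the caller must supply enough storage):

```c
#include <assert.h>
#include <stddef.h>

struct mpos { int index; unsigned int constraint; };

/* Insert P into ELEMS (sorted by decreasing index, *NELEM entries);
   on a duplicate index, OR the constraints instead of inserting.  */
static void
set_insert (struct mpos p, struct mpos *elems, size_t *nelem)
{
  size_t lo = 0, hi = *nelem, i;

  /* Binary-search for the insertion point (descending order).  */
  while (lo < hi)
    {
      size_t mid = (lo + hi) >> 1;
      if (elems[mid].index > p.index)
        lo = mid + 1;
      else
        hi = mid;
    }

  if (lo < *nelem && elems[lo].index == p.index)
    {
      elems[lo].constraint |= p.constraint;
      return;
    }

  for (i = (*nelem)++; i > lo; i--)
    elems[i] = elems[i - 1];
  elems[lo] = p;
}

/* Insert indexes 5, 3, 7, then 5 again; the second 5 merges.  */
static size_t
demo_run (struct mpos *out)
{
  struct mpos in[] = { {5, 1}, {3, 2}, {7, 4}, {5, 8} };
  size_t n = 0, i;
  for (i = 0; i < sizeof in / sizeof in[0]; i++)
    set_insert (in[i], out, &n);
  return n;
}
```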

/* Merge two sets of positions into a third.  The result is exactly as if
   the positions of both sets were inserted into an initially empty set.  */
static void
merge (position_set const *s1, position_set const *s2, position_set * m)
{
  size_t i = 0, j = 0;

  REALLOC_IF_NECESSARY (m->elems, m->alloc, s1->nelem + s2->nelem);
  m->nelem = 0;
  while (i < s1->nelem && j < s2->nelem)
    if (s1->elems[i].index > s2->elems[j].index)
      m->elems[m->nelem++] = s1->elems[i++];
    else if (s1->elems[i].index < s2->elems[j].index)
      m->elems[m->nelem++] = s2->elems[j++];
    else
      {
        m->elems[m->nelem] = s1->elems[i++];
        m->elems[m->nelem++].constraint |= s2->elems[j++].constraint;
      }
  while (i < s1->nelem)
    m->elems[m->nelem++] = s1->elems[i++];
  while (j < s2->nelem)
    m->elems[m->nelem++] = s2->elems[j++];
}
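merge's two-pointer walk over the descending-sorted inputs can likewise be sketched stand-alone; equal indexes combine by OR-ing constraints, just as in insert (`struct posn` and `posn_merge` are illustrative names; the destination must hold n1 + n2 entries):

```c
#include <assert.h>
#include <stddef.h>

struct posn { int index; unsigned int constraint; };

/* Merge two sets sorted by decreasing index into M; return M's size.  */
static size_t
posn_merge (struct posn const *s1, size_t n1,
            struct posn const *s2, size_t n2, struct posn *m)
{
  size_t i = 0, j = 0, n = 0;

  while (i < n1 && j < n2)
    if (s1[i].index > s2[j].index)
      m[n++] = s1[i++];
    else if (s1[i].index < s2[j].index)
      m[n++] = s2[j++];
    else
      {
        /* Same position in both sets: OR the constraints together.  */
        m[n] = s1[i++];
        m[n++].constraint |= s2[j++].constraint;
      }
  while (i < n1)
    m[n++] = s1[i++];
  while (j < n2)
    m[n++] = s2[j++];
  return n;
}
```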

/* Delete a position from a set.  */
static void
delete (position p, position_set * s)
{
  size_t i;

  for (i = 0; i < s->nelem; ++i)
    if (p.index == s->elems[i].index)
      break;
  if (i < s->nelem)
    for (--s->nelem; i < s->nelem; ++i)
      s->elems[i] = s->elems[i + 1];
}

/* Find the index of the state corresponding to the given position set with
   the given preceding context, or create a new state if there is no such
   state.  Context tells whether we got here on a newline or letter.  */
static state_num
state_index (struct dfa *d, position_set const *s, int context)
{
  size_t hash = 0;
  int constraint;
  state_num i, j;

  for (i = 0; i < s->nelem; ++i)
    hash ^= s->elems[i].index + s->elems[i].constraint;

  /* Try to find a state that exactly matches the proposed one.  */
  for (i = 0; i < d->sindex; ++i)
    {
      if (hash != d->states[i].hash || s->nelem != d->states[i].elems.nelem
          || context != d->states[i].context)
        continue;
      for (j = 0; j < s->nelem; ++j)
        if (s->elems[j].constraint
            != d->states[i].elems.elems[j].constraint
            || s->elems[j].index != d->states[i].elems.elems[j].index)
          break;
      if (j == s->nelem)
        return i;
    }

  /* We'll have to create a new state.  */
  REALLOC_IF_NECESSARY (d->states, d->salloc, d->sindex + 1);
  d->states[i].hash = hash;
  alloc_position_set (&d->states[i].elems, s->nelem);
  copy (s, &d->states[i].elems);
  d->states[i].context = context;
  d->states[i].has_backref = false;
  d->states[i].has_mbcset = false;
  d->states[i].constraint = 0;
  d->states[i].first_end = 0;
  d->states[i].mbps.nelem = 0;
  d->states[i].mbps.elems = NULL;

  for (j = 0; j < s->nelem; ++j)
    if (d->tokens[s->elems[j].index] < 0)
      {
        constraint = s->elems[j].constraint;
        if (SUCCEEDS_IN_CONTEXT (constraint, context, CTX_ANY))
          d->states[i].constraint |= constraint;
        if (!d->states[i].first_end)
          d->states[i].first_end = d->tokens[s->elems[j].index];
      }
    else if (d->tokens[s->elems[j].index] == BACKREF)
      {
        d->states[i].constraint = NO_CONSTRAINT;
        d->states[i].has_backref = true;
      }

  ++d->sindex;

  return i;
}
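
/* Illustrative sketch (added comment, not in the original source):
   the hash computed in state_index is just the XOR of index + constraint
   over the set's elements.  For
     s = { (index 3, constraint 0x10), (index 7, constraint 0x20) }
   we get
     hash = (3 + 0x10) ^ (7 + 0x20) = 0x13 ^ 0x27 = 0x34.
   Equal sets always hash equally, so the exact-match scan compares
   element-by-element only for states whose hash, element count, and
   context all match.  */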

/* Find the epsilon closure of a set of positions.  If any position of the set
   contains a symbol that matches the empty string in some context, replace
   that position with the elements of its follow labeled with an appropriate
   constraint.  Repeat exhaustively until no such positions are left.
   S->elems must be large enough to hold the result.  */
static void
epsclosure (position_set * s, struct dfa const *d)
{
  size_t i, j;
  char *visited;  /* Array of booleans, enough to use char, not int.  */
  position p, old;

  CALLOC (visited, d->tindex);

  for (i = 0; i < s->nelem; ++i)
    if (d->tokens[s->elems[i].index] >= NOTCHAR
        && d->tokens[s->elems[i].index] != BACKREF
        && d->tokens[s->elems[i].index] != ANYCHAR
        && d->tokens[s->elems[i].index] != MBCSET
        && d->tokens[s->elems[i].index] < CSET)
      {
        old = s->elems[i];
        p.constraint = old.constraint;
        delete (s->elems[i], s);
        if (visited[old.index])
          {
            --i;
            continue;
          }
        visited[old.index] = 1;
        switch (d->tokens[old.index])
          {
          case BEGLINE:
            p.constraint &= BEGLINE_CONSTRAINT;
            break;
          case ENDLINE:
            p.constraint &= ENDLINE_CONSTRAINT;
            break;
          case BEGWORD:
            p.constraint &= BEGWORD_CONSTRAINT;
            break;
          case ENDWORD:
            p.constraint &= ENDWORD_CONSTRAINT;
            break;
          case LIMWORD:
            p.constraint &= LIMWORD_CONSTRAINT;
            break;
          case NOTLIMWORD:
            p.constraint &= NOTLIMWORD_CONSTRAINT;
            break;
          default:
            break;
          }
        for (j = 0; j < d->follows[old.index].nelem; ++j)
          {
            p.index = d->follows[old.index].elems[j].index;
            insert (p, s);
          }
        /* Force rescan to start at the beginning.  */
        i = -1;
      }

  free (visited);
}

/* Returns the set of contexts for which there is at least one
   character included in C.  */

static int
charclass_context (charclass c)
{
  int context = 0;
  unsigned int j;

  if (tstbit (eolbyte, c))
    context |= CTX_NEWLINE;

  for (j = 0; j < CHARCLASS_INTS; ++j)
    {
      if (c[j] & letters[j])
        context |= CTX_LETTER;
      if (c[j] & ~(letters[j] | newline[j]))
        context |= CTX_NONE;
    }

  return context;
}

/* Returns the contexts on which the position set S depends.  Each context
   in the set of returned contexts (let's call it SC) may have a different
   follow set than other contexts in SC, and also different from the
   follow set of the complement set (SC ^ CTX_ANY).  However, all contexts
   in the complement set will have the same follow set.  */

static int _GL_ATTRIBUTE_PURE
state_separate_contexts (position_set const *s)
{
  int separate_contexts = 0;
  size_t j;

  for (j = 0; j < s->nelem; ++j)
    {
      if (PREV_NEWLINE_DEPENDENT (s->elems[j].constraint))
        separate_contexts |= CTX_NEWLINE;
      if (PREV_LETTER_DEPENDENT (s->elems[j].constraint))
        separate_contexts |= CTX_LETTER;
    }

  return separate_contexts;
}


/* Perform bottom-up analysis on the parse tree, computing various functions.
   Note that at this point, we're pretending constructs like \< are real
   characters rather than constraints on what can follow them.

   Nullable:  A node is nullable if it is at the root of a regexp that can
   match the empty string.
   *  EMPTY leaves are nullable.
   * No other leaf is nullable.
   * A QMARK or STAR node is nullable.
   * A PLUS node is nullable if its argument is nullable.
   * A CAT node is nullable if both its arguments are nullable.
   * An OR node is nullable if either argument is nullable.

   Firstpos:  The firstpos of a node is the set of positions (nonempty leaves)
   that could correspond to the first character of a string matching the
   regexp rooted at the given node.
   * EMPTY leaves have empty firstpos.
   * The firstpos of a nonempty leaf is that leaf itself.
   * The firstpos of a QMARK, STAR, or PLUS node is the firstpos of its
     argument.
   * The firstpos of a CAT node is the firstpos of the left argument, union
     the firstpos of the right if the left argument is nullable.
   * The firstpos of an OR node is the union of firstpos of each argument.

   Lastpos:  The lastpos of a node is the set of positions that could
   correspond to the last character of a string matching the regexp at
   the given node.
   * EMPTY leaves have empty lastpos.
   * The lastpos of a nonempty leaf is that leaf itself.
   * The lastpos of a QMARK, STAR, or PLUS node is the lastpos of its
     argument.
   * The lastpos of a CAT node is the lastpos of its right argument, union
     the lastpos of the left if the right argument is nullable.
   * The lastpos of an OR node is the union of the lastpos of each argument.

   Follow:  The follow of a position is the set of positions that could
   correspond to the character following a character matching the node in
   a string matching the regexp.  At this point we consider special symbols
   that match the empty string in some context to be just normal characters.
   Later, if we find that a special symbol is in a follow set, we will
   replace it with the elements of its follow, labeled with an appropriate
   constraint.
   * Every node in the firstpos of the argument of a STAR or PLUS node is in
     the follow of every node in the lastpos.
   * Every node in the firstpos of the second argument of a CAT node is in
     the follow of every node in the lastpos of the first argument.

   Because of the postfix representation of the parse tree, the depth-first
   analysis is conveniently done by a linear scan with the aid of a stack.
   Sets are stored as arrays of the elements, obeying a stack-like allocation
   scheme; the number of elements in each set deeper in the stack can be
   used to determine the address of a particular set's array.  */
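
/* Worked example (added comment, not in the original source): for the
   regexp "ab*", the parse tree is CAT(a, STAR(b)).
   * Nullable: a is not nullable, STAR(b) is, so the CAT is not nullable.
   * firstpos(CAT) = firstpos(a) = {a}, since a is not nullable.
   * lastpos(CAT) = lastpos(STAR(b)) union lastpos(a) = {b, a},
     since STAR(b) is nullable.
   * Follow: the STAR puts firstpos(b) into the follow of lastpos(b),
     giving follow(b) = {b}; the CAT puts firstpos(STAR(b)) into the
     follow of lastpos(a), giving follow(a) = {b}.  */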
void
dfaanalyze (struct dfa *d, int searchflag)
{
  bool *nullable;               /* Nullable stack.  */
  size_t *nfirstpos;            /* Element count stack for firstpos sets.  */
  position *firstpos;           /* Array where firstpos elements are stored.  */
  size_t *nlastpos;             /* Element count stack for lastpos sets.  */
  position *lastpos;            /* Array where lastpos elements are stored.  */
  position_set tmp;             /* Temporary set for merging sets.  */
  position_set merged;          /* Result of merging sets.  */
  int separate_contexts;        /* Context wanted by some position.  */
  bool *o_nullable;
  size_t *o_nfirst, *o_nlast;
  position *o_firstpos, *o_lastpos;
  size_t i, j;
  position *pos;

#ifdef DEBUG
  fprintf (stderr, "dfaanalyze:\n");
  for (i = 0; i < d->tindex; ++i)
    {
      fprintf (stderr, " %zd:", i);
      prtok (d->tokens[i]);
    }
  putc ('\n', stderr);
#endif

  d->searchflag = searchflag != 0;

  MALLOC (nullable, d->depth);
  o_nullable = nullable;
  MALLOC (nfirstpos, d->depth);
  o_nfirst = nfirstpos;
  MALLOC (firstpos, d->nleaves);
  o_firstpos = firstpos, firstpos += d->nleaves;
  MALLOC (nlastpos, d->depth);
  o_nlast = nlastpos;
  MALLOC (lastpos, d->nleaves);
  o_lastpos = lastpos, lastpos += d->nleaves;
  alloc_position_set (&merged, d->nleaves);

  CALLOC (d->follows, d->tindex);

  for (i = 0; i < d->tindex; ++i)
    {
      switch (d->tokens[i])
        {
        case EMPTY:
          /* The empty set is nullable.  */
          *nullable++ = true;

          /* The firstpos and lastpos of the empty leaf are both empty.  */
          *nfirstpos++ = *nlastpos++ = 0;
          break;

        case STAR:
        case PLUS:
          /* Every element in the firstpos of the argument is in the follow
             of every element in the lastpos.  */
          tmp.nelem = nfirstpos[-1];
          tmp.elems = firstpos;
          pos = lastpos;
          for (j = 0; j < nlastpos[-1]; ++j)
            {
              merge (&tmp, &d->follows[pos[j].index], &merged);
              copy (&merged, &d->follows[pos[j].index]);
            }

          /* Fall through.  */
        case QMARK:
          /* A QMARK or STAR node is automatically nullable.  */
          if (d->tokens[i] != PLUS)
            nullable[-1] = true;
          break;

        case CAT:
          /* Every element in the firstpos of the second argument is in the
             follow of every element in the lastpos of the first argument.  */
          tmp.nelem = nfirstpos[-1];
          tmp.elems = firstpos;
          pos = lastpos + nlastpos[-1];
          for (j = 0; j < nlastpos[-2]; ++j)
            {
              merge (&tmp, &d->follows[pos[j].index], &merged);
              copy (&merged, &d->follows[pos[j].index]);
            }

          /* The firstpos of a CAT node is the firstpos of the first argument,
             union that of the second argument if the first is nullable.  */
          if (nullable[-2])
            nfirstpos[-2] += nfirstpos[-1];
          else
            firstpos += nfirstpos[-1];
          --nfirstpos;

          /* The lastpos of a CAT node is the lastpos of the second argument,
             union that of the first argument if the second is nullable.  */
          if (nullable[-1])
            nlastpos[-2] += nlastpos[-1];
          else
            {
              pos = lastpos + nlastpos[-2];
              for (j = nlastpos[-1]; j-- > 0;)
                pos[j] = lastpos[j];
              lastpos += nlastpos[-2];
              nlastpos[-2] = nlastpos[-1];
            }
          --nlastpos;

          /* A CAT node is nullable if both arguments are nullable.  */
          nullable[-2] &= nullable[-1];
          --nullable;
          break;

        case OR:
          /* The firstpos is the union of the firstpos of each argument.  */
          nfirstpos[-2] += nfirstpos[-1];
          --nfirstpos;

          /* The lastpos is the union of the lastpos of each argument.  */
          nlastpos[-2] += nlastpos[-1];
          --nlastpos;

          /* An OR node is nullable if either argument is nullable.  */
          nullable[-2] |= nullable[-1];
          --nullable;
          break;

        default:
          /* Anything else is a nonempty position.  (Note that special
             constructs like \< are treated as nonempty strings here;
             an "epsilon closure" effectively makes them nullable later.
             Backreferences have to get a real position so we can detect
             transitions on them later.  But they are nullable.)  */
          *nullable++ = d->tokens[i] == BACKREF;

          /* This position is in its own firstpos and lastpos.  */
          *nfirstpos++ = *nlastpos++ = 1;
          --firstpos, --lastpos;
          firstpos->index = lastpos->index = i;
          firstpos->constraint = lastpos->constraint = NO_CONSTRAINT;

          /* Allocate the follow set for this position.  */
          alloc_position_set (&d->follows[i], 1);
          break;
        }
#ifdef DEBUG
      /* ... balance the above nonsyntactic #ifdef goo...  */
      fprintf (stderr, "node %zd:", i);
      prtok (d->tokens[i]);
      putc ('\n', stderr);
      fprintf (stderr, nullable[-1] ? " nullable: yes\n" : " nullable: no\n");
      fprintf (stderr, " firstpos:");
      for (j = nfirstpos[-1]; j-- > 0;)
        {
          fprintf (stderr, " %zd:", firstpos[j].index);
          prtok (d->tokens[firstpos[j].index]);
        }
      fprintf (stderr, "\n lastpos:");
      for (j = nlastpos[-1]; j-- > 0;)
        {
          fprintf (stderr, " %zd:", lastpos[j].index);
          prtok (d->tokens[lastpos[j].index]);
        }
      putc ('\n', stderr);
#endif
    }

  /* For each follow set that is the follow set of a real position, replace
     it with its epsilon closure.  */
  for (i = 0; i < d->tindex; ++i)
    if (d->tokens[i] < NOTCHAR || d->tokens[i] == BACKREF
        || d->tokens[i] == ANYCHAR || d->tokens[i] == MBCSET
        || d->tokens[i] >= CSET)
      {
#ifdef DEBUG
        fprintf (stderr, "follows(%zd:", i);
        prtok (d->tokens[i]);
        fprintf (stderr, "):");
        for (j = d->follows[i].nelem; j-- > 0;)
          {
            fprintf (stderr, " %zd:", d->follows[i].elems[j].index);
            prtok (d->tokens[d->follows[i].elems[j].index]);
          }
        putc ('\n', stderr);
#endif
        copy (&d->follows[i], &merged);
        epsclosure (&merged, d);
        copy (&merged, &d->follows[i]);
      }

  /* Get the epsilon closure of the firstpos of the regexp.  The result will
     be the set of positions of state 0.  */
  merged.nelem = 0;
  for (i = 0; i < nfirstpos[-1]; ++i)
    insert (firstpos[i], &merged);
  epsclosure (&merged, d);

  /* Build the initial state.  */
  d->salloc = 1;
  d->sindex = 0;
  MALLOC (d->states, d->salloc);

  separate_contexts = state_separate_contexts (&merged);
  state_index (d, &merged,
               (separate_contexts & CTX_NEWLINE
                ? CTX_NEWLINE : separate_contexts ^ CTX_ANY));

  free (o_nullable);
  free (o_nfirst);
  free (o_firstpos);
  free (o_nlast);
  free (o_lastpos);
  free (merged.elems);
}


/* Find, for each character, the transition out of state s of d, and store
   it in the appropriate slot of trans.

   We divide the positions of s into groups (positions can appear in more
   than one group).  Each group is labeled with a set of characters that
   every position in the group matches (taking into account, if necessary,
   preceding context information of s).  For each group, find the union
   of its elements' follows.  This set is the set of positions of the
   new state.  For each character in the group's label, set the transition
   on this character to be to a state corresponding to the set's positions,
   and its associated backward context information, if necessary.

   If we are building a searching matcher, we include the positions of state
   0 in every state.

   The collection of groups is constructed by building an equivalence-class
   partition of the positions of s.

   For each position, find the set of characters C that it matches.  Eliminate
   any characters from C that fail on grounds of backward context.

   Search through the groups, looking for a group whose label L has nonempty
   intersection with C.  If L - C is nonempty, create a new group labeled
   L - C and having the same positions as the current group, and set L to
   the intersection of L and C.  Insert the position in this group, set
   C = C - L, and resume scanning.

   If after comparing with every group there are characters remaining in C,
   create a new group labeled with the characters of C and insert this
   position in that group.  */
void
dfastate (state_num s, struct dfa *d, state_num trans[])
{
  leaf_set *grps;               /* As many as will ever be needed.  */
  charclass *labels;            /* Labels corresponding to the groups.  */
  size_t ngrps = 0;             /* Number of groups actually used.  */
  position pos;                 /* Current position being considered.  */
  charclass matches;            /* Set of matching characters.  */
  unsigned int matchesf;        /* Nonzero if matches is nonempty.  */
  charclass intersect;          /* Intersection with some label set.  */
  unsigned int intersectf;      /* Nonzero if intersect is nonempty.  */
  charclass leftovers;          /* Stuff in the label that didn't match.  */
  unsigned int leftoversf;      /* Nonzero if leftovers is nonempty.  */
  position_set follows;         /* Union of the follows of some group.  */
  position_set tmp;             /* Temporary space for merging sets.  */
  int possible_contexts;        /* Contexts that this group can match.  */
  int separate_contexts;        /* Context that new state wants to know.  */
  state_num state;              /* New state.  */
  state_num state_newline;      /* New state on a newline transition.  */
  state_num state_letter;       /* New state on a letter transition.  */
  bool next_isnt_1st_byte = false; /* Flag if we can't add state0.  */
  size_t i, j, k;

  MALLOC (grps, NOTCHAR);
  MALLOC (labels, NOTCHAR);

  zeroset (matches);

  for (i = 0; i < d->states[s].elems.nelem; ++i)
    {
      pos = d->states[s].elems.elems[i];
      if (d->tokens[pos.index] >= 0 && d->tokens[pos.index] < NOTCHAR)
        setbit (d->tokens[pos.index], matches);
      else if (d->tokens[pos.index] >= CSET)
        copyset (d->charclasses[d->tokens[pos.index] - CSET], matches);
      else
        {
          if (d->tokens[pos.index] == MBCSET
              || d->tokens[pos.index] == ANYCHAR)
            {
              /* MB_CUR_MAX > 1 */
              if (d->tokens[pos.index] == MBCSET)
                d->states[s].has_mbcset = true;
              /* ANYCHAR and MBCSET must match a single character, so we
                 must add this position to d->states[s].mbps, which holds
                 the positions that match a full character, not just a
                 single byte.  */
              if (d->states[s].mbps.nelem == 0)
                alloc_position_set (&d->states[s].mbps, 1);
              insert (pos, &(d->states[s].mbps));
            }
          continue;
        }

      /* Some characters may need to be eliminated from matches because
         they fail in the current context.  */
      if (pos.constraint != NO_CONSTRAINT)
        {
          if (!SUCCEEDS_IN_CONTEXT (pos.constraint,
                                    d->states[s].context, CTX_NEWLINE))
            for (j = 0; j < CHARCLASS_INTS; ++j)
              matches[j] &= ~newline[j];
          if (!SUCCEEDS_IN_CONTEXT (pos.constraint,
                                    d->states[s].context, CTX_LETTER))
            for (j = 0; j < CHARCLASS_INTS; ++j)
              matches[j] &= ~letters[j];
          if (!SUCCEEDS_IN_CONTEXT (pos.constraint,
                                    d->states[s].context, CTX_NONE))
            for (j = 0; j < CHARCLASS_INTS; ++j)
              matches[j] &= letters[j] | newline[j];

          /* If there are no characters left, there's no point in going on.  */
          for (j = 0; j < CHARCLASS_INTS && !matches[j]; ++j)
            continue;
          if (j == CHARCLASS_INTS)
            continue;
        }

      for (j = 0; j < ngrps; ++j)
        {
          /* If matches contains a single character only, and the current
             group's label doesn't contain that character, go on to the
             next group.  */
          if (d->tokens[pos.index] >= 0 && d->tokens[pos.index] < NOTCHAR
              && !tstbit (d->tokens[pos.index], labels[j]))
            continue;

          /* Check if this group's label has a nonempty intersection with
             matches.  */
          intersectf = 0;
          for (k = 0; k < CHARCLASS_INTS; ++k)
            intersectf |= intersect[k] = matches[k] & labels[j][k];
          if (!intersectf)
            continue;

          /* It does; now find the set differences both ways.  */
          leftoversf = matchesf = 0;
          for (k = 0; k < CHARCLASS_INTS; ++k)
            {
              /* Even an optimizing compiler can't know this for sure.  */
              int match = matches[k], label = labels[j][k];

              leftoversf |= leftovers[k] = ~match & label;
              matchesf |= matches[k] = match & ~label;
            }

          /* If there were leftovers, create a new group labeled with them.  */
          if (leftoversf)
            {
              copyset (leftovers, labels[ngrps]);
              copyset (intersect, labels[j]);
              MALLOC (grps[ngrps].elems, d->nleaves);
              memcpy (grps[ngrps].elems, grps[j].elems,
                      sizeof (grps[j].elems[0]) * grps[j].nelem);
              grps[ngrps].nelem = grps[j].nelem;
              ++ngrps;
            }

          /* Put the position in the current group.  The constraint is
             irrelevant here.  */
          grps[j].elems[grps[j].nelem++] = pos.index;

          /* If every character matching the current position has been
             accounted for, we're done.  */
          if (!matchesf)
            break;
        }

      /* If we've passed the last group, and there are still characters
         unaccounted for, then we'll have to create a new group.  */
      if (j == ngrps)
        {
          copyset (matches, labels[ngrps]);
          zeroset (matches);
          MALLOC (grps[ngrps].elems, d->nleaves);
          grps[ngrps].nelem = 1;
          grps[ngrps].elems[0] = pos.index;
          ++ngrps;
        }
    }

  alloc_position_set (&follows, d->nleaves);
  alloc_position_set (&tmp, d->nleaves);

  /* If we are a searching matcher, the default transition is to a state
     containing the positions of state 0, otherwise the default transition
     is to fail miserably.  */
  if (d->searchflag)
    {
      /* Find the state(s) corresponding to the positions of state 0.  */
      copy (&d->states[0].elems, &follows);
      separate_contexts = state_separate_contexts (&follows);
      state = state_index (d, &follows, separate_contexts ^ CTX_ANY);
      if (separate_contexts & CTX_NEWLINE)
        state_newline = state_index (d, &follows, CTX_NEWLINE);
      else
        state_newline = state;
      if (separate_contexts & CTX_LETTER)
        state_letter = state_index (d, &follows, CTX_LETTER);
      else
        state_letter = state;

      for (i = 0; i < NOTCHAR; ++i)
        trans[i] = (IS_WORD_CONSTITUENT (i)) ? state_letter : state;
      trans[eolbyte] = state_newline;
    }
  else
    for (i = 0; i < NOTCHAR; ++i)
      trans[i] = -1;

  for (i = 0; i < ngrps; ++i)
    {
      follows.nelem = 0;

      /* Find the union of the follows of the positions of the group.
         This is a hideously inefficient loop.  Fix it someday.  */
      for (j = 0; j < grps[i].nelem; ++j)
        for (k = 0; k < d->follows[grps[i].elems[j]].nelem; ++k)
          insert (d->follows[grps[i].elems[j]].elems[k], &follows);

      if (d->mb_cur_max > 1)
        {
          /* If a position in follows.elems is not the first byte of a
             multibyte character, then the state corresponding to follows
             must accept bytes that are not the first byte of a multibyte
             character.  Any byte that state then encounters cannot be
             the first byte of a multibyte character, nor a single-byte
             character, so we must not add the positions of state[0]
             (which expects a first byte) to the next state.

             For example, suppose <sb a> is a certain single-byte
             character, <mb A> is a certain multibyte character, and the
             code of <sb a> equals the second byte of the code of <mb A>.
             If state[i] transits to state[i+1] by accepting the first
             byte of <mb A>, and state[i+1] then encounters the code of
             <sb a>, that byte must be the second byte of <mb A>, not
             <sb a>; so we cannot add the positions of state[0] to
             state[i+1].  */

          next_isnt_1st_byte = false;
          for (j = 0; j < follows.nelem; ++j)
            {
              if (!(d->multibyte_prop[follows.elems[j].index] & 1))
                {
                  next_isnt_1st_byte = true;
                  break;
                }
            }
        }

      /* If we are building a searching matcher, throw in the positions
         of state 0 as well.  */
      if (d->searchflag
          && (d->mb_cur_max == 1 || !next_isnt_1st_byte))
        for (j = 0; j < d->states[0].elems.nelem; ++j)
          insert (d->states[0].elems.elems[j], &follows);

      /* Find out if the new state will want any context information.  */
      possible_contexts = charclass_context (labels[i]);
      separate_contexts = state_separate_contexts (&follows);

      /* Find the state(s) corresponding to the union of the follows.  */
      if ((separate_contexts & possible_contexts) != possible_contexts)
        state = state_index (d, &follows, separate_contexts ^ CTX_ANY);
      else
        state = -1;
      if (separate_contexts & possible_contexts & CTX_NEWLINE)
        state_newline = state_index (d, &follows, CTX_NEWLINE);
      else
        state_newline = state;
      if (separate_contexts & possible_contexts & CTX_LETTER)
        state_letter = state_index (d, &follows, CTX_LETTER);
      else
        state_letter = state;

      /* Set the transitions for each character in the current label.  */
      for (j = 0; j < CHARCLASS_INTS; ++j)
        for (k = 0; k < INTBITS; ++k)
          if (labels[i][j] & 1U << k)
            {
              int c = j * INTBITS + k;

              if (c == eolbyte)
                trans[c] = state_newline;
              else if (IS_WORD_CONSTITUENT (c))
                trans[c] = state_letter;
              else if (c < NOTCHAR)
                trans[c] = state;
            }
    }

  for (i = 0; i < ngrps; ++i)
    free (grps[i].elems);
  free (follows.elems);
  free (tmp.elems);
  free (grps);
  free (labels);
}
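
/* Illustrative sketch (added comment, not in the original source):
   suppose state s has two positions, p1 matching [ab] and p2 matching
   [bc].  Processing p1 creates the group {a,b}:{p1}.  Processing p2
   splits it, leaving
     {b} : {p1, p2}   (the intersection)
     {a} : {p1}       (the leftovers)
   and the unaccounted-for character c then starts a new group
     {c} : {p2}.
   Each group gets one target state, computed from the union of its
   positions' follows.  */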

/* Some routines for manipulating a compiled dfa's transition tables.
   Each state may or may not have a transition table; if it does, then
   d->trans[state] points to the table when the state is non-accepting,
   and d->fails[state] points to it when the state is accepting.  If a
   state has no table at all, then d->trans[state] is NULL.  */

static void
build_state (state_num s, struct dfa *d)
{
  state_num *trans;             /* The new transition table.  */
  state_num i;

  /* Set an upper limit on the number of transition tables that will ever
     exist at once.  1024 is arbitrary.  The idea is that the frequently
     used transition tables will be quickly rebuilt, whereas the ones that
     were only needed once or twice will be cleared away.  */
  if (d->trcount >= 1024)
    {
      for (i = 0; i < d->tralloc; ++i)
        {
          free (d->trans[i]);
          free (d->fails[i]);
          d->trans[i] = d->fails[i] = NULL;
        }
      d->trcount = 0;
    }

  ++d->trcount;

  /* Set up the success bits for this state.  */
  d->success[s] = 0;
  if (ACCEPTS_IN_CONTEXT (d->states[s].context, CTX_NEWLINE, s, *d))
    d->success[s] |= CTX_NEWLINE;
  if (ACCEPTS_IN_CONTEXT (d->states[s].context, CTX_LETTER, s, *d))
    d->success[s] |= CTX_LETTER;
  if (ACCEPTS_IN_CONTEXT (d->states[s].context, CTX_NONE, s, *d))
    d->success[s] |= CTX_NONE;

  MALLOC (trans, NOTCHAR);
  dfastate (s, d, trans);

  /* Now go through the new transition table, and make sure that the trans
     and fail arrays are allocated large enough to hold a pointer for the
     largest state mentioned in the table.  */
  for (i = 0; i < NOTCHAR; ++i)
    if (trans[i] >= d->tralloc)
      {
        state_num oldalloc = d->tralloc;

        while (trans[i] >= d->tralloc)
          d->tralloc *= 2;
        REALLOC (d->realtrans, d->tralloc + 1);
        d->trans = d->realtrans + 1;
        REALLOC (d->fails, d->tralloc);
        REALLOC (d->success, d->tralloc);
        REALLOC (d->newlines, d->tralloc);
        while (oldalloc < d->tralloc)
          {
            d->trans[oldalloc] = NULL;
            d->fails[oldalloc++] = NULL;
          }
      }

  /* Keep the newline transition in a special place so we can use it as
     a sentinel.  */
  d->newlines[s] = trans[eolbyte];
  trans[eolbyte] = -1;

  if (ACCEPTING (s, *d))
    d->fails[s] = trans;
  else
    d->trans[s] = trans;
}

static void
build_state_zero (struct dfa *d)
{
  d->tralloc = 1;
  d->trcount = 0;
  CALLOC (d->realtrans, d->tralloc + 1);
  d->trans = d->realtrans + 1;
  CALLOC (d->fails, d->tralloc);
  MALLOC (d->success, d->tralloc);
  MALLOC (d->newlines, d->tralloc);
  build_state (0, d);
}

/* Multibyte character handling sub-routines for dfaexec.  */

/* The initial state may encounter a byte that is neither a single-byte
   character nor the first byte of a multibyte character, but it is
   incorrect for the initial state to accept such a byte.  For example,
   in Shift JIS the regular expression "\\" accepts the code point 0x5c,
   but it should not accept the second byte of the code point 0x815c.
   The initial state must therefore skip bytes that are neither a
   single-byte character nor the first byte of a multibyte
   character.  */
#define SKIP_REMAINS_MB_IF_INITIAL_STATE(s, p)		\
  if (s == 0)						\
    {							\
      while (d->inputwcs[p - buf_begin] == 0		\
             && d->mblen_buf[p - buf_begin] != 0	\
             && (unsigned char const *) p < buf_end)	\
        ++p;						\
      if ((char *) p >= end)				\
        {						\
          *end = saved_end;				\
          return NULL;					\
        }						\
    }

static void
realloc_trans_if_necessary (struct dfa *d, state_num new_state)
{
  /* Make sure that the trans and fail arrays are allocated large enough
     to hold a pointer for the new state.  */
  if (new_state >= d->tralloc)
    {
      state_num oldalloc = d->tralloc;

      while (new_state >= d->tralloc)
        d->tralloc *= 2;
      REALLOC (d->realtrans, d->tralloc + 1);
      d->trans = d->realtrans + 1;
      REALLOC (d->fails, d->tralloc);
      REALLOC (d->success, d->tralloc);
      REALLOC (d->newlines, d->tralloc);
      while (oldalloc < d->tralloc)
        {
          d->trans[oldalloc] = NULL;
          d->fails[oldalloc++] = NULL;
        }
    }
}

/* Return values of transit_state_singlebyte and
   transit_state_consume_1char.  */
typedef enum
{
  TRANSIT_STATE_IN_PROGRESS,    /* State transition has not finished.  */
  TRANSIT_STATE_DONE,           /* State transition has finished.  */
  TRANSIT_STATE_END_BUFFER      /* Reached the end of the buffer.  */
} status_transit_state;

/* Consume a single byte and transition from state S to *NEXT_STATE.
   This function is almost the same as the state transition routine in
   dfaexec, but it performs just one transition, rather than looping
   until a match succeeds or the end of the buffer is reached.  */
static status_transit_state
transit_state_singlebyte (struct dfa *d, state_num s, unsigned char const *p,
                          state_num * next_state)
{
  state_num *t;
  state_num works = s;

  status_transit_state rval = TRANSIT_STATE_IN_PROGRESS;

  while (rval == TRANSIT_STATE_IN_PROGRESS)
    {
      if ((t = d->trans[works]) != NULL)
        {
          works = t[*p];
          rval = TRANSIT_STATE_DONE;
          if (works < 0)
            works = 0;
        }
      else if (works < 0)
        {
          if (p == buf_end)
            {
              /* This case cannot currently happen.  */
              abort ();
            }
          works = 0;
        }
      else if (d->fails[works])
        {
          works = d->fails[works][*p];
          rval = TRANSIT_STATE_DONE;
        }
      else
        {
          build_state (works, d);
        }
    }
  *next_state = works;
  return rval;
}

/* Match a "." against the current context.  buf_begin[IDX] is the
   current position.  Return the length of the match, in bytes.
   POS is the position of the ".".  */
static int
match_anychar (struct dfa *d, state_num s, position pos, size_t idx)
{
  int context;
  wchar_t wc;
  int mbclen;

  wc = d->inputwcs[idx];
  mbclen = MAX (1, d->mblen_buf[idx]);

  /* Check syntax bits.  */
  if (wc == (wchar_t) eolbyte)
    {
      if (!(syntax_bits & RE_DOT_NEWLINE))
        return 0;
    }
  else if (wc == (wchar_t) '\0')
    {
      if (syntax_bits & RE_DOT_NOT_NULL)
        return 0;
    }

  context = wchar_context (wc);
  if (!SUCCEEDS_IN_CONTEXT (pos.constraint, d->states[s].context, context))
    return 0;

  return mbclen;
}

/* Match a bracket expression against the current context.
   buf_begin[IDX] is the current position.
   Return the length of the match, in bytes.
   POS is the position of the bracket expression.  */
static int
match_mb_charset (struct dfa *d, state_num s, position pos, size_t idx)
{
  size_t i;
  bool match;              /* Matching succeeded.  */
  int match_len;           /* Length of the character (or collating element)
                              with which this operator matches.  */
  int op_len;              /* Length of the operator.  */
  char buffer[128];

  /* Pointer to the structure to which we are currently referring.  */
  struct mb_char_classes *work_mbc;

  int context;
  wchar_t wc;                   /* Character currently being examined.  */

  wc = d->inputwcs[idx];

  /* Check syntax bits.  */
  if (wc == (wchar_t) eolbyte)
    {
      if (!(syntax_bits & RE_DOT_NEWLINE))
        return 0;
    }
  else if (wc == (wchar_t) '\0')
    {
      if (syntax_bits & RE_DOT_NOT_NULL)
        return 0;
    }

  context = wchar_context (wc);
  if (!SUCCEEDS_IN_CONTEXT (pos.constraint, d->states[s].context, context))
    return 0;

  /* Point work_mbc at the bracket expression currently being matched.  */
  work_mbc = &(d->mbcsets[(d->multibyte_prop[pos.index]) >> 2]);
  match = !work_mbc->invert;
  match_len = MAX (1, d->mblen_buf[idx]);

  /* Match in range 0-255?  */
  if (wc < NOTCHAR && work_mbc->cset != -1
      && tstbit (to_uchar (wc), d->charclasses[work_mbc->cset]))
    goto charset_matched;

  /* Match with a character class?  */
  for (i = 0; i < work_mbc->nch_classes; i++)
    {
      if (iswctype ((wint_t) wc, work_mbc->ch_classes[i]))
        goto charset_matched;
    }

  strncpy (buffer, (char const *) buf_begin + idx, match_len);
  buffer[match_len] = '\0';

  /* Match with an equivalence class?  */
  for (i = 0; i < work_mbc->nequivs; i++)
    {
      op_len = strlen (work_mbc->equivs[i]);
      strncpy (buffer, (char const *) buf_begin + idx, op_len);
      buffer[op_len] = '\0';
      if (strcoll (work_mbc->equivs[i], buffer) == 0)
        {
          match_len = op_len;
          goto charset_matched;
        }
    }

  /* Match with a collating element?  */
  for (i = 0; i < work_mbc->ncoll_elems; i++)
    {
      op_len = strlen (work_mbc->coll_elems[i]);
      strncpy (buffer, (char const *) buf_begin + idx, op_len);
      buffer[op_len] = '\0';

      if (strcoll (work_mbc->coll_elems[i], buffer) == 0)
        {
          match_len = op_len;
          goto charset_matched;
        }
    }

  /* Match with a range?  */
  for (i = 0; i < work_mbc->nranges; i++)
    {
      if (work_mbc->range_sts[i] <= wc && wc <= work_mbc->range_ends[i])
        goto charset_matched;
    }

  /* Match with a character?  */
  for (i = 0; i < work_mbc->nchars; i++)
    {
      if (wc == work_mbc->chars[i])
        goto charset_matched;
    }

  match = !match;

charset_matched:
  return match ? match_len : 0;
}

/* Check whether each element of 'd->states[s].mbps.elems' can match.  Then
   return the array which corresponds to 'd->states[s].mbps.elems'; each
   element of the array contains the number of bytes with which the element
   can match.

   'idx' is the index from buf_begin, and it is the current position
   in the buffer.

   The returned array is 'd->mb_match_lens', so the caller must NOT
   free it.  */
static int *
check_matching_with_multibyte_ops (struct dfa *d, state_num s, size_t idx)
{
  size_t i;
  int *rarray;

  rarray = d->mb_match_lens;
  for (i = 0; i < d->states[s].mbps.nelem; ++i)
    {
      position pos = d->states[s].mbps.elems[i];
      switch (d->tokens[pos.index])
        {
        case ANYCHAR:
          rarray[i] = match_anychar (d, s, pos, idx);
          break;
        case MBCSET:
          rarray[i] = match_mb_charset (d, s, pos, idx);
          break;
        default:
          break;                /* cannot happen.  */
        }
    }
  return rarray;
}

/* Consume a single character and enumerate all of the positions which can
   be the next position from the state 's'.

   'match_lens' is the input.  It can be NULL, but it can also be the output
   of check_matching_with_multibyte_ops, as an optimization.

   '*pp' is advanced past the consumed character, '*mbclen' is set to that
   character's length in bytes, and the enumerated positions are collected
   in 'd->mb_follows'.  */
static status_transit_state
transit_state_consume_1char (struct dfa *d, state_num s,
                             unsigned char const **pp,
                             int *match_lens, int *mbclen)
{
  size_t i, j;
  int k;
  state_num s1, s2;
  int *work_mbls;
  status_transit_state rs = TRANSIT_STATE_DONE;

  /* Calculate the length of the (single- or multi-byte) character
     to which p points.  */
  *mbclen = MAX (1, d->mblen_buf[*pp - buf_begin]);

  /* Calculate the state which can be reached from the state 's' by
     consuming '*mbclen' single bytes from the buffer.  */
  s1 = s;
  for (k = 0; k < *mbclen; k++)
    {
      s2 = s1;
      rs = transit_state_singlebyte (d, s2, (*pp)++, &s1);
    }
  /* Copy the positions contained by 's1' to the set 'd->mb_follows'.  */
  copy (&(d->states[s1].elems), d->mb_follows);

  /* Check (input) match_lens, and initialize if it is NULL.  */
  if (match_lens == NULL && d->states[s].mbps.nelem != 0)
    work_mbls = check_matching_with_multibyte_ops (d, s, *pp - buf_begin);
  else
    work_mbls = match_lens;

  /* Add all of the positions which can be reached from 's' by consuming
     a single character.  */
  for (i = 0; i < d->states[s].mbps.nelem; i++)
    {
      if (work_mbls[i] == *mbclen)
        for (j = 0; j < d->follows[d->states[s].mbps.elems[i].index].nelem;
             j++)
          insert (d->follows[d->states[s].mbps.elems[i].index].elems[j],
                  d->mb_follows);
    }

  /* FIXME: this return value is always ignored.  */
  return rs;
}

/* Transit from state 's', then return the new state and update the buffer
   pointer.  This function handles operators that can match a multibyte
   character or a collating element (which may span several characters).  */
static state_num
transit_state (struct dfa *d, state_num s, unsigned char const **pp)
{
  state_num s1;
  int mbclen;  /* The length of current input multibyte character.  */
  int maxlen = 0;
  size_t i, j;
  int *match_lens = NULL;
  size_t nelem = d->states[s].mbps.nelem;       /* Just an alias.  */
  unsigned char const *p1 = *pp;
  wchar_t wc;

  if (nelem > 0)
    /* This state has one or more multibyte operators.
       We check whether each of them can match.  */
    {
      /* Note: match_lens points into d->mb_match_lens; do not free it.  */
      match_lens = check_matching_with_multibyte_ops (d, s, *pp - buf_begin);

      for (i = 0; i < nelem; i++)
        /* Find the operator that matches the longest string
           in this state.  */
        {
          if (match_lens[i] > maxlen)
            maxlen = match_lens[i];
        }
    }

  if (nelem == 0 || maxlen == 0)
    /* This state has no multibyte operator which can match.
       We need to check only one single byte character.  */
    {
      status_transit_state rs;
      rs = transit_state_singlebyte (d, s, *pp, &s1);

      /* We must update the pointer if state transition succeeded.  */
      if (rs == TRANSIT_STATE_DONE)
        ++*pp;

      return s1;
    }

  /* This state has some operators which can match a multibyte character.  */
  d->mb_follows->nelem = 0;

  /* 'maxlen' may be longer than the length of a single character, because
     the match may be not a character but a (multi-character) collating
     element.  We enumerate all of the positions which 's' can reach by
     consuming 'maxlen' bytes.  */
  transit_state_consume_1char (d, s, pp, match_lens, &mbclen);

  wc = d->inputwcs[*pp - mbclen - buf_begin];
  s1 = state_index (d, d->mb_follows, wchar_context (wc));
  realloc_trans_if_necessary (d, s1);

  while (*pp - p1 < maxlen)
    {
      transit_state_consume_1char (d, s1, pp, NULL, &mbclen);

      for (i = 0; i < nelem; i++)
        {
          if (match_lens[i] == *pp - p1)
            for (j = 0;
                 j < d->follows[d->states[s1].mbps.elems[i].index].nelem; j++)
              insert (d->follows[d->states[s1].mbps.elems[i].index].elems[j],
                      d->mb_follows);
        }

      wc = d->inputwcs[*pp - mbclen - buf_begin];
      s1 = state_index (d, d->mb_follows, wchar_context (wc));
      realloc_trans_if_necessary (d, s1);
    }
  return s1;
}


/* Initialize mblen_buf and inputwcs with data from the next line.  */

static void
prepare_wc_buf (struct dfa *d, const char *begin, const char *end)
{
  unsigned char eol = eolbyte;
  size_t i;
  size_t ilim = end - begin + 1;

  buf_begin = (unsigned char *) begin;

  for (i = 0; i < ilim; i++)
    {
      size_t nbytes = mbs_to_wchar (d, d->inputwcs + i, begin + i, ilim - i,
                                    &mbs);
      d->mblen_buf[i] = nbytes - (nbytes == 1);
      if (begin[i] == eol)
        break;
      while (--nbytes != 0)
        {
          i++;
          d->mblen_buf[i] = nbytes;
          d->inputwcs[i] = 0;
        }
    }

  buf_end = (unsigned char *) (begin + i);
  d->mblen_buf[i] = 0;
  d->inputwcs[i] = 0;              /* sentinel */
}

/* Search through a buffer looking for a match to the given struct dfa.
   Find the first occurrence of a string matching the regexp in the
   buffer, and the shortest possible version thereof.  Return a pointer to
   the first character after the match, or NULL if none is found.  BEGIN
   points to the beginning of the buffer, and END points to the first byte
   after its end.  Note however that we store a sentinel byte (usually
   newline) in *END, so the actual buffer must be one byte longer.
   When ALLOW_NL is nonzero, newlines may appear in the matching string.
   If COUNT is non-NULL, increment *COUNT once for each newline processed.
   Finally, if BACKREF is non-NULL set *BACKREF to indicate whether we
   encountered a back-reference (1) or not (0).  The caller may use this
   to decide whether to fall back on a backtracking matcher.  */
char *
dfaexec (struct dfa *d, char const *begin, char *end,
         int allow_nl, size_t *count, int *backref)
{
  state_num s, s1;              /* Current state.  */
  unsigned char const *p;       /* Current input character.  */
  state_num **trans, *t;        /* Copy of d->trans so it can be optimized
                                   into a register.  */
  unsigned char eol = eolbyte;  /* Likewise for eolbyte.  */
  unsigned char saved_end;

  if (!d->tralloc)
    build_state_zero (d);

  s = s1 = 0;
  p = (unsigned char const *) begin;
  trans = d->trans;
  saved_end = *(unsigned char *) end;
  *end = eol;

  if (d->mb_cur_max > 1)
    {
      static bool mb_alloc = false;
      REALLOC_IF_NECESSARY (d->mblen_buf, d->nmblen_buf, end - begin + 2);
      REALLOC_IF_NECESSARY (d->inputwcs, d->ninputwcs, end - begin + 2);
      memset (&mbs, 0, sizeof (mbstate_t));
      prepare_wc_buf (d, (const char *) p, end);
      if (!mb_alloc)
        {
          MALLOC (d->mb_match_lens, d->nleaves);
          MALLOC (d->mb_follows, 1);
          alloc_position_set (d->mb_follows, d->nleaves);
          mb_alloc = true;
        }
    }

  for (;;)
    {
      if (d->mb_cur_max > 1)
        {
          while ((t = trans[s]) != NULL)
            {
              if (p > buf_end)
                break;
              s1 = s;
              SKIP_REMAINS_MB_IF_INITIAL_STATE (s, p);

              if (d->states[s].mbps.nelem == 0)
                {
                  s = t[*p++];
                  continue;
                }

              /* Falling back to the glibc matcher in this case gives
                 better performance (up to 25% better on [a-z], for
                 example) and enables support for collating symbols and
                 equivalence classes.  */
              if (d->states[s].has_mbcset && backref)
                {
                  *backref = 1;
                  *end = saved_end;
                  return (char *) p;
                }

              /* Can match with a multibyte character (or a multi-character
                 collating element).  The transition table might be updated.  */
              s = transit_state (d, s, &p);
              trans = d->trans;
            }
        }
      else
        {
          while ((t = trans[s]) != NULL)
            {
              s1 = t[*p++];
              if ((t = trans[s1]) == NULL)
                {
                  state_num tmp = s;
                  s = s1;
                  s1 = tmp;     /* swap */
                  break;
                }
              s = t[*p++];
            }
        }

      if (s >= 0 && (char *) p <= end && d->fails[s])
        {
          if (d->success[s] & sbit[*p])
            {
              if (backref)
                *backref = d->states[s].has_backref;
              *end = saved_end;
              return (char *) p;
            }

          s1 = s;
          if (d->mb_cur_max > 1)
            {
              /* Can match with a multibyte character (or a multi-character
                 collating element).  The transition table might be updated.  */
              s = transit_state (d, s, &p);
              trans = d->trans;
            }
          else
            s = d->fails[s][*p++];
          continue;
        }

      /* If the previous character was a newline, count it.  */
      if ((char *) p <= end && p[-1] == eol)
        {
          if (count)
            ++*count;

          if (d->mb_cur_max > 1)
            prepare_wc_buf (d, (const char *) p, end);
        }

      /* Check if we've run off the end of the buffer.  */
      if ((char *) p > end)
        {
          *end = saved_end;
          return NULL;
        }

      if (s >= 0)
        {
          if (!d->trans[s])
            build_state (s, d);
          trans = d->trans;
          continue;
        }

      if (p[-1] == eol && allow_nl)
        {
          s = d->newlines[s1];
          continue;
        }

      s = 0;
    }
}

/* Search through a buffer looking for a potential match for D.
   Return the offset of the byte after the first potential match.
   If there is no match, return (size_t) -1.  If D lacks a superset
   so it's not known whether there is a match, return (size_t) -2.
   BEGIN points to the beginning of the buffer, and END points to the
   first byte after its end.  Store a sentinel byte (usually newline)
   in *END, so the actual buffer must be one byte longer.  If COUNT is
   non-NULL, increment *COUNT once for each newline processed.  */
size_t
dfahint (struct dfa *d, char const *begin, char *end, size_t *count)
{
  if (! d->superset)
    return -2;
  else
    {
      char const *match = dfaexec (d->superset, begin, end, 1, count, NULL);
      return match ? match - begin : -1;
    }
}

static void
free_mbdata (struct dfa *d)
{
  size_t i;

  free (d->multibyte_prop);
  d->multibyte_prop = NULL;

  for (i = 0; i < d->nmbcsets; ++i)
    {
      size_t j;
      struct mb_char_classes *p = &(d->mbcsets[i]);
      free (p->chars);
      free (p->ch_classes);
      free (p->range_sts);
      free (p->range_ends);

      for (j = 0; j < p->nequivs; ++j)
        free (p->equivs[j]);
      free (p->equivs);

      for (j = 0; j < p->ncoll_elems; ++j)
        free (p->coll_elems[j]);
      free (p->coll_elems);
    }

  free (d->mbcsets);
  d->mbcsets = NULL;
  d->nmbcsets = 0;

  free (d->mblen_buf);
  free (d->inputwcs);
  if (d->mb_follows)
    {
      free (d->mb_follows->elems);
      free (d->mb_follows);
    }
  free (d->mb_match_lens);
}

/* Initialize the components of a dfa that the other routines don't
   initialize for themselves.  */
void
dfainit (struct dfa *d)
{
  memset (d, 0, sizeof *d);

  d->calloc = 1;
  MALLOC (d->charclasses, d->calloc);

  d->talloc = 1;
  MALLOC (d->tokens, d->talloc);

  d->mb_cur_max = MB_CUR_MAX;

  if (d->mb_cur_max > 1)
    {
      d->nmultibyte_prop = 1;
      MALLOC (d->multibyte_prop, d->nmultibyte_prop);
      d->mbcsets_alloc = 1;
      MALLOC (d->mbcsets, d->mbcsets_alloc);
    }
}

static void
dfaoptimize (struct dfa *d)
{
  size_t i;

  if (!using_utf8 ())
    return;

  for (i = 0; i < d->tindex; ++i)
    {
      switch (d->tokens[i])
        {
        case ANYCHAR:
          /* Lowered.  */
          abort ();
        case MBCSET:
          /* Requires multi-byte algorithm.  */
          return;
        default:
          break;
        }
    }

  free_mbdata (d);
  d->mb_cur_max = 1;
}

static void
dfasuperset (struct dfa *d)
{
  size_t i, j;
  charclass ccl;
  bool have_achar = false;
  bool have_nchar = false;
  struct dfa *sup = dfaalloc ();

  *sup = *d;
  sup->mb_cur_max = 1;
  sup->multibyte_prop = NULL;
  sup->mbcsets = NULL;
  sup->superset = NULL;
  sup->states = NULL;
  sup->sindex = 0;
  sup->follows = NULL;
  sup->tralloc = 0;
  sup->realtrans = NULL;
  sup->fails = NULL;
  sup->success = NULL;
  sup->newlines = NULL;
  sup->musts = NULL;

  MALLOC (sup->charclasses, sup->calloc);
  memcpy (sup->charclasses, d->charclasses,
          d->cindex * sizeof *sup->charclasses);

  sup->talloc = d->tindex * 2;
  MALLOC (sup->tokens, sup->talloc);

  for (i = j = 0; i < d->tindex; i++)
    {
      switch (d->tokens[i])
        {
        case ANYCHAR:
        case MBCSET:
        case BACKREF:
          zeroset (ccl);
          notset (ccl);
          sup->tokens[j++] = CSET + dfa_charclass_index (sup, ccl);
          sup->tokens[j++] = STAR;
          if (d->tokens[i + 1] == QMARK || d->tokens[i + 1] == STAR
              || d->tokens[i + 1] == PLUS)
            i++;
          have_achar = true;
          break;
        case BEGWORD:
        case ENDWORD:
        case LIMWORD:
        case NOTLIMWORD:
          if (MB_CUR_MAX > 1)
            {
              /* Ignore these constraints.  */
              sup->tokens[j++] = EMPTY;
              break;
            }
        default:
          sup->tokens[j++] = d->tokens[i];
          if ((0 <= d->tokens[i] && d->tokens[i] < NOTCHAR)
              || d->tokens[i] >= CSET)
            have_nchar = true;
          break;
        }
    }
  sup->tindex = j;

  if ((d->mb_cur_max == 1 && !have_achar) || !have_nchar)
    dfafree (sup);
  else
    d->superset = sup;
}

static fsatoken_token_t
hook_lexer (fsalex_ctxt_t *lexer_context)
{
  fsatoken_token_t temp_token;

  temp_token = fsalex_lex (lexer_context);
  fprintf(pll_log, "hook_lexer: token: %lx\n", temp_token);
  return temp_token;
}

/* Now do the lexing and parsing a SECOND time, this time by re-priming the
   lexer with the same pattern, but calling fsaparse_parse () instead of
   dfaparse ().  The list of tokens (postfix order) output by both parsers
   should be identical (assuming that we know from the earlier parallel-lex
   trial that the lexers were identical).  */

/* Parse and analyze a single string of the given length.  */
void
dfacomp (char const *s, size_t len, struct dfa *d, int searchflag)
{
  dfainit (d);
  dfambcache (d);
  dfaparse (s, len, d);
  dfamust (d);

  fsalex_pattern (lexer, s, len);
  fsaparse_lexer (parser, lexer,
                  (proto_lexparse_lex_fn_t *) hook_lexer,
                  (proto_lexparse_exchange_fn_t *) fsalex_exchange);
  fsaparse_parse (parser);

  /* YET ANOTHER HACK, 16 April 2014 (was it related to the lunar eclipse
     last night?? !!?? )
     Compare, side-by-side, the list of tokens generated by dfa.c and by
     fsaparse, and write these to the debug log file.  As elsewhere, these
     should be identical, as the modularised code starts as a functional
     clone of the original code.  (Later, if/when tokens are reworked to
     maintain abstractions at a higher level, the token lists will
     differ.)  */
  {
    size_t nr_tokens;
    fsatoken_token_t *token_list;
    size_t i;
    fsamusts_list_element_t *musts;

    fsaparse_get_token_list (parser, &nr_tokens, &token_list);
    fprintf (pll_log, "\ntokens:  original  fsaparse\n");
    for (i = 0; i < MAX (d->tindex, nr_tokens); ++i)
      {
        static char buf[256];
        if (i < d->tindex)
          {
            sprintf (buf, "%02lx", d->tokens[i]);
            fprintf (pll_log, "%17s ", buf);
          }
        else
          fprintf (pll_log, "%17s", "");
        if (i < nr_tokens)
          {
            sprintf (buf, "%02lx", token_list[i]);
            fprintf (pll_log, "%9s", buf);
          }
        fprintf (pll_log, "\n");
      }

    /* And finally, see how extracting musts from dfa.c compares to extracting
       musts via the fsa/charclass family of functions; again, these should
       be identical.  */
    musts = (fsamusts_list_element_t *) d->musts;
    show_musts ("original dfa.c", musts);

    /* ANOTHER UGLY HACK: Rely on dfa.c's case_fold and unibyte locale when
       instructing dfamust how to operate; an "Exchange" function might be
       more appropriate in the short-to-mid-term, but in the longer term,
       the token vocabulary should get more expressive, so that information
       can be conveyed directly.  */
    musts = fsamusts_must (NULL, nr_tokens, token_list,
                           /* dfa.c copy: */ case_fold,
                           /* current (dfa.c) locale: */ MB_CUR_MAX == 1);
    show_musts ("fsa* et al functions", musts);
  }

  dfaoptimize (d);
  dfasuperset (d);
  dfaanalyze (d, searchflag);
  if (d->superset)
    dfaanalyze (d->superset, searchflag);
}

/* Free the storage held by the components of a dfa.  */
void
dfafree (struct dfa *d)
{
  size_t i;
  struct dfamust *dm, *ndm;

  free (d->charclasses);
  free (d->tokens);

  if (d->mb_cur_max > 1)
    free_mbdata (d);

  for (i = 0; i < d->sindex; ++i)
    {
      free (d->states[i].elems.elems);
      free (d->states[i].mbps.elems);
    }
  free (d->states);

  if (d->follows)
    {
      for (i = 0; i < d->tindex; ++i)
        free (d->follows[i].elems);
      free (d->follows);
    }

  for (i = 0; i < d->tralloc; ++i)
    {
      free (d->trans[i]);
      free (d->fails[i]);
    }

  free (d->realtrans);
  free (d->fails);
  free (d->newlines);
  free (d->success);

  for (dm = d->musts; dm; dm = ndm)
    {
      ndm = dm->next;
      free (dm->must);
      free (dm);
    }

  if (d->superset)
    dfafree (d->superset);
}

/* Having found the postfix representation of the regular expression,
   try to find a long sequence of characters that must appear in any line
   containing the r.e.
   Finding a "longest" sequence is beyond the scope here;
   we take an easy way out and hope for the best.
   (Take "(ab|a)b"--please.)

   We do a bottom-up calculation of sequences of characters that must appear
   in matches of r.e.'s represented by trees rooted at the nodes of the postfix
   representation:
        sequences that must appear at the left of the match ("left")
        sequences that must appear at the right of the match ("right")
        lists of sequences that must appear somewhere in the match ("in")
        sequences that must constitute the match ("is")

   When we get to the root of the tree, we use one of the longest of its
   calculated "in" sequences as our answer.  The sequence we find is stored
   in the d->musts list (where "d" is the single argument passed to
   "dfamust").

   The sequences calculated for the various types of node (in pseudo ANSI C)
   are shown below.  "p" is the operand of unary operators (and the left-hand
   operand of binary operators); "q" is the right-hand operand of binary
   operators.

   "ZERO" means "a zero-length sequence" below.

        Type	left		right		is		in
        ----	----		-----		--		--
        char c	# c		# c		# c		# c

        ANYCHAR	ZERO		ZERO		ZERO		ZERO

        MBCSET	ZERO		ZERO		ZERO		ZERO

        CSET	ZERO		ZERO		ZERO		ZERO

        STAR	ZERO		ZERO		ZERO		ZERO

        QMARK	ZERO		ZERO		ZERO		ZERO

        PLUS	p->left		p->right	ZERO		p->in

        CAT	(p->is==ZERO)?	(q->is==ZERO)?	(p->is!=ZERO &&	p->in plus
                p->left :	q->right :	q->is!=ZERO) ?	q->in plus
                p->is##q->left	p->right##q->is	p->is##q->is :	p->right##q->left
                                                ZERO

        OR	longest common	longest common	(do p->is and	substrings common to
                leading		trailing	q->is have same	p->in and q->in
                (sub)sequence	(sub)sequence	length and
                of p->left	of p->right	content) ?
                and q->left	and q->right	p->is : NULL

   If there's anything else we recognize in the tree, all four sequences get set
   to zero-length sequences.  If there's something we don't recognize in the
   tree, we just return a zero-length sequence.

   Break ties in favor of infrequent letters (choosing 'zzz' in preference to
   'aaa')?

   And ... is it here or someplace that we might ponder "optimizations" such as
        egrep 'psi|epsilon'	->	egrep 'psi'
        egrep 'pepsi|epsilon'	->	egrep 'epsi'
                                        (Yes, we now find "epsi" as a "string
                                        that must occur", but we might also
                                        simplify the *entire* r.e. being sought)
        grep '[c]'		->	grep 'c'
        grep '(ab|a)b'		->	grep 'ab'
        grep 'ab*'		->	grep 'a'
        grep 'a*b'		->	grep 'b'

   There are several issues:

   Is optimization easy (enough)?

   Does optimization actually accomplish anything,
   or is the automaton you get from "psi|epsilon" (for example)
   the same as the one you get from "psi" (for example)?

   Are optimizable r.e.'s likely to be used in real-life situations
   (something like 'ab*' is probably unlikely; something like
   'psi|epsilon' is likelier)?  */

static char *
icatalloc (char *old, char const *new)
{
  char *result;
  size_t oldsize = old == NULL ? 0 : strlen (old);
  size_t newsize = new == NULL ? 0 : strlen (new);
  if (newsize == 0)
    return old;
  result = xrealloc (old, oldsize + newsize + 1);
  memcpy (result + oldsize, new, newsize + 1);
  return result;
}

static char *
icpyalloc (char const *string)
{
  return icatalloc (NULL, string);
}

static char *_GL_ATTRIBUTE_PURE
istrstr (char const *lookin, char const *lookfor)
{
  char const *cp;
  size_t len;

  len = strlen (lookfor);
  for (cp = lookin; *cp != '\0'; ++cp)
    if (strncmp (cp, lookfor, len) == 0)
      return (char *) cp;
  return NULL;
}

static void
freelist (char **cpp)
{
  size_t i;

  if (cpp == NULL)
    return;
  for (i = 0; cpp[i] != NULL; ++i)
    {
      free (cpp[i]);
      cpp[i] = NULL;
    }
}

static char **
enlist (char **cpp, char *new, size_t len)
{
  size_t i, j;

  if (cpp == NULL)
    return NULL;
  if ((new = icpyalloc (new)) == NULL)
    {
      freelist (cpp);
      return NULL;
    }
  new[len] = '\0';
  /* Is there already something in the list that contains 'new'
     (i.e., 'new' itself or a superstring of it)?  */
  for (i = 0; cpp[i] != NULL; ++i)
    if (istrstr (cpp[i], new) != NULL)
      {
        free (new);
        return cpp;
      }
  /* Eliminate any obsoleted strings.  */
  j = 0;
  while (cpp[j] != NULL)
    if (istrstr (new, cpp[j]) == NULL)
      ++j;
    else
      {
        free (cpp[j]);
        if (--i == j)
          break;
        cpp[j] = cpp[i];
        cpp[i] = NULL;
      }
  /* Add the new string.  */
  REALLOC (cpp, i + 2);
  cpp[i] = new;
  cpp[i + 1] = NULL;
  return cpp;
}

/* Given pointers to two strings, return a pointer to an allocated
   list of their distinct common substrings.  Return NULL if something
   seems wild.  */
static char **
comsubs (char *left, char const *right)
{
  char **cpp;
  char *lcp;
  char *rcp;
  size_t i, len;

  if (left == NULL || right == NULL)
    return NULL;
  cpp = malloc (sizeof *cpp);
  if (cpp == NULL)
    return NULL;
  cpp[0] = NULL;
  for (lcp = left; *lcp != '\0'; ++lcp)
    {
      len = 0;
      rcp = strchr (right, *lcp);
      while (rcp != NULL)
        {
          for (i = 1; lcp[i] != '\0' && lcp[i] == rcp[i]; ++i)
            continue;
          if (i > len)
            len = i;
          rcp = strchr (rcp + 1, *lcp);
        }
      if (len == 0)
        continue;
      {
        char **p = enlist (cpp, lcp, len);
        if (p == NULL)
          {
            freelist (cpp);
            cpp = NULL;
            break;
          }
        cpp = p;
      }
    }
  return cpp;
}

static char **
addlists (char **old, char **new)
{
  size_t i;

  if (old == NULL || new == NULL)
    return NULL;
  for (i = 0; new[i] != NULL; ++i)
    {
      old = enlist (old, new[i], strlen (new[i]));
      if (old == NULL)
        break;
    }
  return old;
}

/* Given two lists of substrings, return a new list giving substrings
   common to both.  */
static char **
inboth (char **left, char **right)
{
  char **both;
  char **temp;
  size_t lnum, rnum;

  if (left == NULL || right == NULL)
    return NULL;
  both = malloc (sizeof *both);
  if (both == NULL)
    return NULL;
  both[0] = NULL;
  for (lnum = 0; left[lnum] != NULL; ++lnum)
    {
      for (rnum = 0; right[rnum] != NULL; ++rnum)
        {
          temp = comsubs (left[lnum], right[rnum]);
          if (temp == NULL)
            {
              freelist (both);
              return NULL;
            }
          both = addlists (both, temp);
          freelist (temp);
          free (temp);
          if (both == NULL)
            return NULL;
        }
    }
  return both;
}

typedef struct
{
  char **in;
  char *left;
  char *right;
  char *is;
} must;

static void
resetmust (must * mp)
{
  mp->left[0] = mp->right[0] = mp->is[0] = '\0';
  freelist (mp->in);
}

static void
dfamust (struct dfa *d)
{
  must *musts;
  must *mp;
  char *result;
  size_t ri;
  size_t i;
  bool exact;
  static must must0;
  struct dfamust *dm;
  static char empty_string[] = "";

  result = empty_string;
  exact = false;
  MALLOC (musts, d->tindex + 1);
  mp = musts;
  for (i = 0; i <= d->tindex; ++i)
    mp[i] = must0;
  for (i = 0; i <= d->tindex; ++i)
    {
      mp[i].in = xmalloc (sizeof *mp[i].in);
      mp[i].left = xmalloc (2);
      mp[i].right = xmalloc (2);
      mp[i].is = xmalloc (2);
      mp[i].left[0] = mp[i].right[0] = mp[i].is[0] = '\0';
      mp[i].in[0] = NULL;
    }
#ifdef DEBUG
  fprintf (stderr, "dfamust:\n");
  for (i = 0; i < d->tindex; ++i)
    {
      fprintf (stderr, " %zd:", i);
      prtok (d->tokens[i]);
    }
  putc ('\n', stderr);
#endif
  for (ri = 0; ri < d->tindex; ++ri)
    {
      token t = d->tokens[ri];
      switch (t)
        {
        case LPAREN:
        case RPAREN:
          assert (!"neither LPAREN nor RPAREN may appear here");

        case STAR:
        case QMARK:
          assert (musts < mp);
          --mp;
          /* Fall through.  */
        case EMPTY:
        case BEGLINE:
        case ENDLINE:
        case BEGWORD:
        case ENDWORD:
        case LIMWORD:
        case NOTLIMWORD:
        case BACKREF:
        case ANYCHAR:
        case MBCSET:
          resetmust (mp);
          break;

        case OR:
          assert (&musts[2] <= mp);
          {
            char **new;
            must *lmp;
            must *rmp;
            size_t j, ln, rn, n;

            rmp = --mp;
            lmp = --mp;
            /* If the two operands' exact strings differ, the
               alternation has no exact string.  */
            if (!STREQ (lmp->is, rmp->is))
              lmp->is[0] = '\0';
            /* Left side: keep the longest common prefix.  */
            i = 0;
            while (lmp->left[i] != '\0' && lmp->left[i] == rmp->left[i])
              ++i;
            lmp->left[i] = '\0';
            /* Right side: keep the longest common suffix.  */
            ln = strlen (lmp->right);
            rn = strlen (rmp->right);
            n = ln;
            if (n > rn)
              n = rn;
            for (i = 0; i < n; ++i)
              if (lmp->right[ln - i - 1] != rmp->right[rn - i - 1])
                break;
            for (j = 0; j < i; ++j)
              lmp->right[j] = lmp->right[(ln - i) + j];
            lmp->right[j] = '\0';
            new = inboth (lmp->in, rmp->in);
            if (new == NULL)
              goto done;
            freelist (lmp->in);
            free (lmp->in);
            lmp->in = new;
          }
          break;

        case PLUS:
          assert (musts < mp);
          --mp;
          mp->is[0] = '\0';
          break;

        case END:
          assert (mp == &musts[1]);
          for (i = 0; musts[0].in[i] != NULL; ++i)
            if (strlen (musts[0].in[i]) > strlen (result))
              result = musts[0].in[i];
          if (STREQ (result, musts[0].is))
            exact = true;
          goto done;

        case CAT:
          assert (&musts[2] <= mp);
          {
            must *lmp;
            must *rmp;

            rmp = --mp;
            lmp = --mp;
            /* In.  Everything in left, plus everything in
               right, plus concatenation of
               left's right and right's left.  */
            lmp->in = addlists (lmp->in, rmp->in);
            if (lmp->in == NULL)
              goto done;
            if (lmp->right[0] != '\0' && rmp->left[0] != '\0')
              {
                char *tp;

                tp = icpyalloc (lmp->right);
                tp = icatalloc (tp, rmp->left);
                lmp->in = enlist (lmp->in, tp, strlen (tp));
                free (tp);
                if (lmp->in == NULL)
                  goto done;
              }
            /* Left-hand side: if the left operand is an exact string,
               its left string extends into the right operand's.  */
            if (lmp->is[0] != '\0')
              {
                lmp->left = icatalloc (lmp->left, rmp->left);
                if (lmp->left == NULL)
                  goto done;
              }
            /* Right-hand side: the left operand's right string survives
               only if the right operand is an exact string.  */
            if (rmp->is[0] == '\0')
              lmp->right[0] = '\0';
            lmp->right = icatalloc (lmp->right, rmp->right);
            if (lmp->right == NULL)
              goto done;
            /* Exact string: defined only if both operands are exact.  */
            if (lmp->is[0] != '\0' && rmp->is[0] != '\0')
              {
                lmp->is = icatalloc (lmp->is, rmp->is);
                if (lmp->is == NULL)
                  goto done;
              }
            else
              lmp->is[0] = '\0';
          }
          break;

        case '\0':
          /* A NUL token ends the scan.  */
          goto done;

        default:
          resetmust (mp);
          if (CSET <= t)
            {
              /* If T is a singleton, or if case-folding in a unibyte
                 locale and T's members all case-fold to the same char,
                 convert T to one of its members.  Otherwise, do
                 nothing further with T.  */
              charclass *ccl = &d->charclasses[t - CSET];
              int j;
              for (j = 0; j < NOTCHAR; j++)
                if (tstbit (j, *ccl))
                  break;
              if (! (j < NOTCHAR))
                break;
              t = j;
              while (++j < NOTCHAR)
                if (tstbit (j, *ccl)
                    && ! (case_fold && MB_CUR_MAX == 1
                          && toupper (j) == toupper (t)))
                  break;
              if (j < NOTCHAR)
                break;
            }
          mp->is[0] = mp->left[0] = mp->right[0]
            = case_fold && MB_CUR_MAX == 1 ? toupper (t) : t;
          mp->is[1] = mp->left[1] = mp->right[1] = '\0';
          mp->in = enlist (mp->in, mp->is, 1);
          if (mp->in == NULL)
            goto done;
          break;
        }
#ifdef DEBUG
      fprintf (stderr, " node: %zd:", ri);
      prtok (d->tokens[ri]);
      fprintf (stderr, "\n  in:");
      for (i = 0; mp->in[i]; ++i)
        fprintf (stderr, " \"%s\"", mp->in[i]);
      fprintf (stderr, "\n  is: \"%s\"\n", mp->is);
      fprintf (stderr, "  left: \"%s\"\n", mp->left);
      fprintf (stderr, "  right: \"%s\"\n", mp->right);
#endif
      ++mp;
    }
done:
  if (strlen (result))
    {
      MALLOC (dm, 1);
      dm->exact = exact;
      dm->must = xmemdup (result, strlen (result) + 1);
      dm->next = d->musts;
      d->musts = dm;
    }
  mp = musts;
  for (i = 0; i <= d->tindex; ++i)
    {
      freelist (mp[i].in);
      free (mp[i].in);
      free (mp[i].left);
      free (mp[i].right);
      free (mp[i].is);
    }
  free (mp);
}

struct dfa *
dfaalloc (void)
{
  return xmalloc (sizeof (struct dfa));
}

struct dfamust *_GL_ATTRIBUTE_PURE
dfamusts (struct dfa const *d)
{
  return d->musts;
}

/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-chdr;
 name="fsalex.h"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fsalex.h"

/* fsalex - Repackage pattern text as compact, expressive tokens

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */


#ifndef FSALEX_H
#define FSALEX_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include "fsatoken.h"
#include "proto-lexparse.h"
#include <regex.h>

/* Multiple lexer instances can exist in parallel, so define an opaque
   type to collect together all the context relating to each instance.  */
typedef struct fsalex_ctxt_struct fsalex_ctxt_t;

/* Generate a new instance of an FSA lexer.  */
extern fsalex_ctxt_t *
fsalex_new (void);

/* Receive the pattern, and reset the lexical analyser state.  The
   interpretation of the bytes in the pattern (single-byte characters?
   variable-length UTF-8 sequences?  etc.) depends on the locale that
   was in force when fsalex_syntax () was called.  NULs may be present
   among the bytes, which is why the length is given explicitly, rather
   than relying on strlen(3).  */
extern void
fsalex_pattern (fsalex_ctxt_t *lexer,
                char const *pattern, size_t const pattern_len);

/* Receive syntax directives, and other pattern interpretation
   instructions such as case folding and end-of-line character.
   In addition, this function configures various internal structures
   based on the locale in force.  */
extern void
fsalex_syntax (fsalex_ctxt_t *lexer,
               reg_syntax_t bits, int fold, unsigned char eol);

/* Define function prototypes for warning and error callbacks.  */
typedef void
fsalex_warn_callback_fn (const char *);
typedef void /* ?? _Noreturn? */
fsalex_error_callback_fn (const char *);

/* Receive functions to deal with exceptions detected by the lexer:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
extern void
fsalex_exception_fns (fsalex_ctxt_t *lexer,
                      fsalex_warn_callback_fn *warningfn,
                      fsalex_error_callback_fn *errorfn);

/* Main function to incrementally consume and interpret the pattern text,
   and return a token describing a single lexical element as a token,
   perhaps with implied parameters such as character classes for CSET
   tokens, and {min,max} values for each REPMN token.  The user should
   call this function repeatedly, receiving one token each time, until
   the lexer detects a fatal error, or returns the END token.  */
/* This function must conform to proto_lexparse_lex_fn_t.  */
extern fsatoken_token_t
fsalex_lex (fsalex_ctxt_t *lexer);

/* Define external function to do non-core data exchanges.
   This function must conform to proto_lexparse_exchange_fn_t.  */
extern int
fsalex_exchange (fsalex_ctxt_t *lexer,
                 proto_lexparse_opcode_t opcode,
                 void *param);

/* Maximum number of characters that can be the case-folded
   counterparts of a single character, not counting the character
   itself.  This is 1 for towupper, 1 for towlower, and 1 for each
   entry in LONESOME_LOWER; see fsalex.c.  */
enum { FSALEX_CASE_FOLDED_BUFSIZE = 1 + 1 + 19 };

extern int fsalex_case_folded_counterparts (fsalex_ctxt_t *lexer,
                                            wchar_t,
                                            wchar_t[FSALEX_CASE_FOLDED_BUFSIZE]);

#endif /* FSALEX_H */

/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-csrc;
 name="fsalex.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fsalex.c"

/* fsalex - Repackage pattern text as compact, expressive tokens

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* Always import environment-specific configuration items first.  */
#include <config.h>    /* define _GNU_SOURCE for regex extensions.  */

#include <assert.h>
#include "charclass.h"
#include <ctype.h>
#include "fsalex.h"
#include "fsatoken.h"
#include <limits.h>
#include <locale.h>
#include "proto-lexparse.h"
#include <regex.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <wctype.h>
#include "xalloc.h"

/* gettext.h ensures that we don't use gettext if ENABLE_NLS is not defined.  */
#include "gettext.h"
#define _(str) gettext (str)

/* ISASCIIDIGIT differs from isdigit, as follows:
   - Its arg may be any int or unsigned int; it need not be an unsigned char.
   - It's guaranteed to evaluate its argument exactly once.
   - It's typically faster.
   Posix 1003.2-1992 section 2.5.2.1 page 50 lines 1556-1558 says that
   only '0' through '9' are digits.  Prefer ISASCIIDIGIT to isdigit unless
   it's important to use the locale's definition of "digit" even when the
   host does not conform to Posix.  */
#define ISASCIIDIGIT(c) ((unsigned) (c) - '0' <= 9)

#define STREQ(a, b) (strcmp (a, b) == 0)

#ifndef MIN
# define MIN(a,b) ((a) < (b) ? (a) : (b))
#endif

/* Reallocate an array of type *P if N_ALLOC is <= N_REQUIRED.  */
#define REALLOC_IF_NECESSARY(p, n_alloc, n_required)		\
  do								\
    {								\
      if ((n_alloc) <= (n_required))				\
        {							\
          size_t new_n_alloc = (n_required) + !(p);		\
          (p) = x2nrealloc (p, &new_n_alloc, sizeof (*(p)));	\
          (n_alloc) = new_n_alloc;				\
        }							\
    }								\
  while (0)

/* The following list maps the names of the Posix named character classes
   to predicate functions that determine whether a given character is in
   the class.  The leading [ has already been eaten by the lexical
   analyzer.  Additional objects are provided to assist the client:
   wchar_desc for multibyte matching, and class for octet matching.
   Lazy evaluation and caching are used to minimise processing costs, so
   these additional items are only valid after a class has been located using
   find_pred ().  */
typedef int predicate_t (wint_t, wctype_t);
typedef struct predicate_entry_struct
{
  const char *name;
  wctype_t wchar_desc;
  charclass_t *class;
} predicate_entry_t;

/* This list is a template, copied into each lexer's state, and interrogated
   and updated from there.  The membership of a class can vary due to locale
   and other settings, so each lexer must maintain its own list.  Duplicate
   class sharing across different lexer instances is facilitated by checks
   in charclass_finalise.  */
/* Locale portability note: We use isalpha_l () etc. functions, with the
   descriptor initialised when fsalex_syntax is called.  */
static predicate_entry_t template_predicate_list[] = {
  {"alpha",  0, NULL},
  {"alnum",  0, NULL},
  {"blank",  0, NULL},
  {"cntrl",  0, NULL},
  {"digit",  0, NULL},
  {"graph",  0, NULL},
  {"lower",  0, NULL},
  {"print",  0, NULL},
  {"punct",  0, NULL},
  {"space",  0, NULL},
  {"upper",  0, NULL},
  {"xdigit", 0, NULL},
  {NULL, 0, NULL}
};

#define PREDICATE_TEMPLATE_ITEMS \
    (sizeof template_predicate_list / sizeof *template_predicate_list)

/* Multibyte character-class storage.  Unibyte classes are handled in full
   by a combination of charclass and the CSET token with a class index
   parameter.  */
/* A bracket operator.
   e.g., [a-c], [[:alpha:]], etc.  */
struct mb_char_classes
{
  ptrdiff_t cset;
  bool invert;
  wchar_t *chars;               /* Normal characters.  */
  size_t nchars;
  wctype_t *ch_classes;         /* Character classes.  */
  size_t nch_classes;
  wchar_t *range_sts;           /* Range characters (start of the range).  */
  wchar_t *range_ends;          /* Range characters (end of the range).  */
  size_t nranges;
  char **equivs;                /* Equivalence classes.  */
  size_t nequivs;
  char **coll_elems;
  size_t ncoll_elems;           /* Collating elements.  */
};

/* Flesh out the opaque instance context type given in the header.  */
struct fsalex_ctxt_struct
{
  /* Using the lexer without setting the syntax is a fatal error, so use a
     flag so we can report such errors in a direct fashion.  */
  bool syntax_initialised;

  /* Exception handling is done by explicit callbacks.  */
  fsalex_warn_callback_fn *warn_client;
  fsalex_error_callback_fn *abandon_with_error;

  /* Pattern pointer/length, updated as pattern is consumed.  */
  char const *lexptr;
  size_t lexleft;

  /* Syntax flags/characters directing how to interpret the pattern.  */
  /* ?? Note: We no longer have a flag here to indicate "syntax_bits_set",
     as was used in dfa.c.  We may want to reintroduce this.  */
  reg_syntax_t syntax_bits;
  bool case_fold;
  unsigned char eolbyte;

  /* Break out some regex syntax bits into boolean vars.  Do this for the
     ones that are heavily used, and/or where the nature of the bitmask flag
     test tends to clutter the lexer code.  */
  bool re_gnu_ops;               /* GNU regex operators are allowed.  */

  /* Carry dotclass here, as it's easier for clients (utf8) to perform
     class operations with this class, rather than to know intimate details
     of the regex syntax configuration bits and items such as eolbyte.  */
  charclass_t *dotclass;         /* ".": All chars except eolbyte and/or
                                        NUL, depending on syntax flags.  */
  charclass_index_t dotclass_index;

  /* Work variables to help organise lexer operation.  */
  fsatoken_token_t lasttok;
  bool laststart;
  size_t parens;

  /* Character class predicate mapping/caching table.  */
  predicate_entry_t predicates[PREDICATE_TEMPLATE_ITEMS];

  /* Minrep and maxrep are actually associated with the REPMN token, and
     need to be accessible outside this module (by the parser), perhaps
     by an explicit interface call.  In the far, far future, a
     completely-reworked token list may see these values properly become
     integrated into the token stream (perhaps by a pair of "Parameter"
     tokens?  Perhaps by a MINREP token with 1 parameter, followed by a
     MAXREP token with a corresponding parameter?)  */
  int minrep, maxrep;

  /* Booleans to simplify unibyte/multibyte code selection paths.  */
  bool unibyte_locale;
  bool multibyte_locale;

  /* REVIEWME: Wide-character support variables.  */
  int cur_mb_len;       /* Length (in bytes) of the last character
                           fetched; this is needed when backing up during
                           lexing.  In a unibyte locale, this variable
                           remains at 1; otherwise, it is updated as
                           required by FETCH_WC.  */

  /* These variables are used only if in a multibyte locale.  */
  wchar_t wctok;         /* Storage for a single multibyte character, used
                           both during lexing, and as the implied parameter
                           of a WCHAR token returned by the lexer.  */
  mbstate_t mbrtowc_state; /* State management area for mbrtowc to use. */

  /* A table indexed by byte values that contains the corresponding wide
     character (if any) for that byte.  WEOF means the byte is the
     leading byte of a multibyte character.  Invalid and null bytes are
     mapped to themselves.  */
  wint_t mbrtowc_cache[FSATOKEN_NOTCHAR];

  /* Array of multibyte bracket expressions.  */
  struct mb_char_classes *mbcsets;
  size_t nmbcsets;
  size_t mbcsets_alloc;

};

/* Set a bit in the charclass for the given wchar_t.  Do nothing if WC
   is represented by a multi-byte sequence.  Even in unibyte locales,
   this may happen when folding case in weird Turkish locales where
   dotless i/dotted I are not included in the chosen character set.
   Return whether a bit was set in the charclass.  */
static bool
setbit_wc (wint_t wc, charclass_t *c)
{
  int b = wctob (wc);
  if (b == EOF)
    return false;

  charclass_setbit (b, c);
  return true;
}

/* Set a bit for B and its case variants in the charclass C.
   We must be in a unibyte locale.  */
static void
setbit_case_fold_c (int b, charclass_t *c)
{
  int ub = toupper (b);
  int i;
  for (i = 0; i < FSATOKEN_NOTCHAR; i++)
    if (toupper (i) == ub)
      charclass_setbit (i, c);
}

/* Convert a possibly-signed character to an unsigned character.  This is
   a bit safer than casting to unsigned char, since it catches some type
   errors that the cast doesn't.  */
static unsigned char
to_uchar (char ch)
{
  return ch;
}

static void
mb_uchar_cache (fsalex_ctxt_t *lexer)
{
  int i;
  for (i = CHAR_MIN; i <= CHAR_MAX; ++i)
    {
      char c = i;
      unsigned char uc = i;
      mbstate_t s = { 0 };
      wchar_t wc;
      wint_t wi;
      switch (mbrtowc (&wc, &c, 1, &s))
        {
        default: wi = wc; break;
        case (size_t) -2: wi = WEOF; break;
        case (size_t) -1: wi = uc; break;
        }
      lexer->mbrtowc_cache[uc] = wi;
    }
}

/* This function is intimately connected with multibyte (wide-char) handling
   in the macro FETCH_WC below, in the case where FETCH_SINGLE_CHAR has run
   but the result has been found to be inconclusive.  It works by unwinding
   the FETCH_SINGLE_CHAR side-effects (lexptr/lexleft), then calling
   mbrtowc on the pattern space, and communicates mbrtowc's understanding
   of the octet stream back to the caller:
     - If a valid multibyte octet sequence is next, then the wide character
       associated with this sequence is written back to *p_wchar, and the
       number of octets consumed is returned; or
     - If the sequence is invalid for any reason, the mbrtowc working state
       is reset (zeroed), *p_wchar is not modified, and 1 is returned.
   Lexer state variables, including cur_mb_len, mbs, lexleft and lexptr, are
   updated as appropriate by this function (mainly if mbrtowc succeeds).
   The wide NUL character is unusual: although it is a 1-octet sequence,
   mbrtowc returns a length of 0; we report it as length 1, but still
   write the converted wide character back to the caller via temp_wchar.  */
/* ?? This code, in partnership with the macro FETCH_WC, is closely related
   to mbs_to_wchar in dfa.c.  There is documentation there (e.g. pattern
   must end in a sentinel, shift encodings not supported, plus other
   comments/guarantees) that is important, but I'm deferring writing anything
   up at present until I see how this code is received.  */
static size_t
fetch_offset_wide_char (fsalex_ctxt_t *lexer, wchar_t *p_wchar)
{
  size_t nbytes;
  wchar_t temp_wchar;

  nbytes = mbrtowc (&temp_wchar,
                    lexer->lexptr - 1, lexer->lexleft + 1,
                    &lexer->mbrtowc_state);
  switch (nbytes)
    {
    case (size_t) -2:
    case (size_t) -1:
      /* Conversion failed: Incomplete (-2) or invalid (-1) sequence.  */
      memset (&lexer->mbrtowc_state, 0, sizeof (lexer->mbrtowc_state));
      return 1;

    case (size_t) 0:
      /* This is the wide NUL character, actually 1 byte long. */
      nbytes = 1;
      break;

    default:
      /* Converted character is in temp_wchar, and nbytes is a byte count.  */
      break;
    }
  /* We converted 1 or more bytes; report the result to the caller.  */
  *p_wchar = temp_wchar;

  /* Update the number of bytes consumed (offset by 1 since
     FETCH_SINGLE_CHAR grabbed one earlier).  */
  lexer->lexptr  += nbytes - 1;
  lexer->lexleft -= nbytes - 1;

  return nbytes;
}

/* Single-character input fetch, with EOF/error handling.  Note that
   characters become unsigned here.  If no characters are available,
   the macro either returns END or reports an error, depending on
   eoferr.  Otherwise, one character is consumed (lexptr/lexleft),
   the char is converted into an unsigned char, and is written into
   the parameter c.  */
#define FETCH_SINGLE_CHAR(lexer, c, eoferr)                  \
  do {                                                       \
    if (! (lexer)->lexleft)                                  \
      {                                                      \
        if ((eoferr) != 0)                                   \
          (lexer)->abandon_with_error (eoferr);              \
        else                                                 \
          return FSATOKEN_TK_END;                            \
      }                                                      \
    (c) = to_uchar (*(lexer)->lexptr++);                     \
    (lexer)->lexleft--;                                      \
  } while (0)

/* Do the fetch in stages: Single char, octet+multibyte cache check,
   and possible wide char fetch if the cache result indicates that the
   input sequence is longer than a single octet.  The first fetch handles
   end-of-input cases (if this happens, control never reaches the rest of
   the macro); otherwise, it returns temp_uchar which is used in the cache
   lookup, and may be the single-octet result.  A cache result of WEOF
   means that the octet is not a complete sequence by itself, so a second
   fetch tweaks lexptr/lexleft to undo the single-char-fetch side-effects,
   and, depending on mbrtowc valid/invalid result, propagates either the
   multichar fetch or the single-char fetch back to the caller.  */
# define FETCH_WC(lexer, c, wc, eoferr)                      \
  do {                                                       \
    wchar_t temp_wc;                                         \
    unsigned char temp_uchar;                                \
    (lexer)->cur_mb_len = 1;                                 \
    FETCH_SINGLE_CHAR ((lexer), temp_uchar, (eoferr));       \
    temp_wc = (lexer)->mbrtowc_cache[temp_uchar];            \
    if (temp_wc != WEOF)                                     \
      {                                                      \
        (c)  = temp_uchar;                                   \
        (wc) = temp_wc;                                      \
      }                                                      \
    else                                                     \
      {                                                      \
        size_t nbytes;                                       \
        temp_wc = temp_uchar;                                \
        nbytes = fetch_offset_wide_char ((lexer), &temp_wc); \
        (wc) = temp_wc;                                      \
        (c) = nbytes == 1 ? temp_uchar : EOF;                \
        (lexer)->cur_mb_len = nbytes;                        \
      }                                                      \
  } while (0)

/* Given a predicate name, find it in a list, and report the list entry
   to the caller.  If the name is not recognised, the function returns NULL.
   The list entry includes a charclass set and (if relevant) a wide-char
   descriptor for testing for the predicate.  Lazy evaluation and caching
   are used to keep processing costs down.  */
static predicate_entry_t *
find_pred (fsalex_ctxt_t *lexer, const char *str)
{
  predicate_entry_t *p_entry;
  charclass_t *work_class;

  for (p_entry = lexer->predicates; p_entry->name; p_entry++)
    {
      if (STREQ (str, p_entry->name))
        break;
    }

  /* If there was no matching predicate name found, return NULL.  */
  if (! p_entry->name)
    return NULL;

  /* Is the charclass pointer NULL for this entry? */
  if (p_entry->class == NULL)
    {
      /* Yes, allocate, set up and cache a charclass for this predicate.  Note
         that the wchar_desc entries were set up in fsalex_syntax ().  */
      int i;
      charclass_index_t index;
      wctype_t wctype_desc;

      wctype_desc = p_entry->wchar_desc;
      work_class = charclass_alloc ();
      for (i = 0; i < FSATOKEN_NOTCHAR; i++)
        {
          wchar_t wc;

          /* Try integer->unsigned char->wide char using lexer's mbrtowc_cache
             array, and, if successful, test for class membership, and set the
             bit in the class if the value is a member.  */
          wc = lexer->mbrtowc_cache[i];
          if (iswctype (wc, wctype_desc))
            charclass_setbit (i, work_class);
        }

      /* Finalise the class, and obtain a persistent class pointer.  */
      index = charclass_finalise (work_class);
      p_entry->class = charclass_get_pointer (index);

    }

  /* Return predicate entry to the caller.  */
  return p_entry;
}

/* Return true if the current locale is known to be a unibyte locale
   without multicharacter collating sequences and where range
   comparisons simply use the native encoding.  These locales can be
   processed more efficiently.  */

static bool
using_simple_locale (fsalex_ctxt_t *lexer)
{
  /* True if the native character set is known to be compatible with
     the C locale.  The following test isn't perfect, but it's good
     enough in practice, as only ASCII and EBCDIC are in common use
     and this test correctly accepts ASCII and rejects EBCDIC.  */
  enum { native_c_charset =
    ('\b' == 8 && '\t' == 9 && '\n' == 10 && '\v' == 11 && '\f' == 12
     && '\r' == 13 && ' ' == 32 && '!' == 33 && '"' == 34 && '#' == 35
     && '%' == 37 && '&' == 38 && '\'' == 39 && '(' == 40 && ')' == 41
     && '*' == 42 && '+' == 43 && ',' == 44 && '-' == 45 && '.' == 46
     && '/' == 47 && '0' == 48 && '9' == 57 && ':' == 58 && ';' == 59
     && '<' == 60 && '=' == 61 && '>' == 62 && '?' == 63 && 'A' == 65
     && 'Z' == 90 && '[' == 91 && '\\' == 92 && ']' == 93 && '^' == 94
     && '_' == 95 && 'a' == 97 && 'z' == 122 && '{' == 123 && '|' == 124
     && '}' == 125 && '~' == 126)
  };

  if (! native_c_charset || lexer->multibyte_locale)
    return false;
  else
    {
      static int unibyte_c = -1;
      if (unibyte_c < 0)
        {
          char const *locale = setlocale (LC_ALL, NULL);
          unibyte_c = (!locale
                       || STREQ (locale, "C")
                       || STREQ (locale, "POSIX"));
        }
      return unibyte_c;
    }
}

/* Multibyte character handling sub-routine for lex.
   Parse a bracket expression and build a struct mb_char_classes.  */
static fsatoken_token_t
parse_bracket_exp (fsalex_ctxt_t *lexer)
{
  bool invert;
  int c, c1, c2;
  charclass_t *ccl;

  /* True if this is a bracket expression that dfaexec is known to
     process correctly.  */
  bool known_bracket_exp = true;

  /* Used to warn about [:space:].
     Bit 0 = first character is a colon.
     Bit 1 = last character is a colon.
     Bit 2 = includes any other character but a colon.
     Bit 3 = includes ranges, char/equiv classes or collation elements.  */
  int colon_warning_state;

  wint_t wc;
  wint_t wc2;
  wint_t wc1 = 0;

  /* Work area to build a mb_char_classes.  */
  struct mb_char_classes *work_mbc;
  size_t chars_al, range_sts_al, range_ends_al, ch_classes_al,
    equivs_al, coll_elems_al;

  chars_al = 0;
  range_sts_al = range_ends_al = 0;
  ch_classes_al = equivs_al = coll_elems_al = 0;
  if (lexer->multibyte_locale)
    {
      REALLOC_IF_NECESSARY (lexer->mbcsets, lexer->mbcsets_alloc,
                            lexer->nmbcsets + 1);

      /* Initialize work area.  */
      work_mbc = &(lexer->mbcsets[lexer->nmbcsets++]);
      memset (work_mbc, 0, sizeof *work_mbc);
    }
  else
    work_mbc = NULL;

  ccl = charclass_alloc ();
  FETCH_WC (lexer, c, wc, _("unbalanced ["));
  if (c == '^')
    {
      FETCH_WC (lexer, c, wc, _("unbalanced ["));
      invert = true;
      known_bracket_exp = using_simple_locale (lexer);
    }
  else
    invert = false;

  colon_warning_state = (c == ':');
  do
    {
      c1 = EOF;                 /* Mark c1 as not yet initialized.  */
      colon_warning_state &= ~2;

      /* Note that if we're looking at some other [:...:] construct,
         we just treat it as a bunch of ordinary characters.  We can do
         this because we assume regex has checked for syntax errors before
         dfa is ever called.  */
      if (c == '[')
        {
#define MAX_BRACKET_STRING_LEN 32
          char str[MAX_BRACKET_STRING_LEN + 1];
          FETCH_WC (lexer, c1, wc1, _("unbalanced ["));

          if ((c1 == ':' && (lexer->syntax_bits & RE_CHAR_CLASSES))
              || c1 == '.' || c1 == '=')
            {
              size_t len = 0;
              for (;;)
                {
                  FETCH_WC (lexer, c, wc, _("unbalanced ["));
                  if ((c == c1 && *lexer->lexptr == ']')
                          || lexer->lexleft == 0)
                    break;
                  if (len < MAX_BRACKET_STRING_LEN)
                    str[len++] = c;
                  else
                    /* This is in any case an invalid class name.  */
                    str[0] = '\0';
                }
              str[len] = '\0';

              /* Fetch bracket.  */
              FETCH_WC (lexer, c, wc, _("unbalanced ["));
              if (c1 == ':')
                /* Find and merge named character class.  POSIX allows
                   character classes to match multicharacter collating
                   elements, but the regex code does not support that,
                   so do not worry about that possibility.  */
                {
                  char const *class;
                  predicate_entry_t *pred;

                  class = str;
                  if (lexer->case_fold && (STREQ (class, "upper")
                                           || STREQ (class, "lower")))
                    class = "alpha";
                  pred = find_pred (lexer, class);
                  if (! pred)
                    lexer->abandon_with_error (_("invalid character class"));
                  charclass_unionset (pred->class, ccl);

                  /* Does this class have a wide-char type descriptor? */
                  if (lexer->multibyte_locale && pred->wchar_desc)
                    {
                      /* Yes, add it to work multibyte-class-desc list.  */
                      REALLOC_IF_NECESSARY (work_mbc->ch_classes,
                                            ch_classes_al,
                                            work_mbc->nch_classes + 1);
                      work_mbc->ch_classes[work_mbc->nch_classes++]
                                 = pred->wchar_desc;
                    }
                }
              else
                known_bracket_exp = false;

              colon_warning_state |= 8;

              /* Fetch new lookahead character.  */
              FETCH_WC (lexer, c1, wc1, _("unbalanced ["));
              continue;
            }

          /* We treat '[' as a normal character here.  c/c1/wc/wc1
             are already set up.  */
        }

      if (c == '\\' && (lexer->syntax_bits & RE_BACKSLASH_ESCAPE_IN_LISTS))
        FETCH_WC (lexer, c, wc, _("unbalanced ["));

      if (c1 == EOF)
        FETCH_WC (lexer, c1, wc1, _("unbalanced ["));

      if (c1 == '-')
        /* Build range characters.  */
        {
          FETCH_WC (lexer, c2, wc2, _("unbalanced ["));

          /* A bracket expression like [a-[.aa.]] matches an unknown set.
             Treat it like [-a[.aa.]] while parsing it, and
             remember that the set is unknown.  */
          if (c2 == '[' && *lexer->lexptr == '.')
            {
              known_bracket_exp = false;
              c2 = ']';
            }

          if (c2 != ']')
            {
              if (c2 == '\\' && (lexer->syntax_bits & RE_BACKSLASH_ESCAPE_IN_LISTS))
                FETCH_WC (lexer, c2, wc2, _("unbalanced ["));

              if (lexer->multibyte_locale)
                {
                  /* When case folding map a range, say [m-z] (or even [M-z])
                     to the pair of ranges, [m-z] [M-Z].  Although this code
                     is wrong in multiple ways, it's never used in practice.
                     FIXME: Remove this (and related) unused code.  */
                  REALLOC_IF_NECESSARY (work_mbc->range_sts,
                                        range_sts_al, work_mbc->nranges + 1);
                  REALLOC_IF_NECESSARY (work_mbc->range_ends,
                                        range_ends_al, work_mbc->nranges + 1);
                  work_mbc->range_sts[work_mbc->nranges] =
                    lexer->case_fold ? towlower (wc) : (wchar_t) wc;
                  work_mbc->range_ends[work_mbc->nranges++] =
                    lexer->case_fold ? towlower (wc2) : (wchar_t) wc2;

                  if (lexer->case_fold && (iswalpha (wc) || iswalpha (wc2)))
                    {
                      REALLOC_IF_NECESSARY (work_mbc->range_sts,
                                            range_sts_al, work_mbc->nranges + 1);
                      work_mbc->range_sts[work_mbc->nranges] = towupper (wc);
                      REALLOC_IF_NECESSARY (work_mbc->range_ends,
                                            range_ends_al, work_mbc->nranges + 1);
                      work_mbc->range_ends[work_mbc->nranges++] = towupper (wc2);
                    }
                }
              else if (using_simple_locale (lexer))
                {
                  for (c1 = c; c1 <= c2; c1++)
                    charclass_setbit (c1, ccl);
                  if (lexer->case_fold)
                    {
                      int uc = toupper (c);
                      int uc2 = toupper (c2);
                      for (c1 = 0; c1 < FSATOKEN_NOTCHAR; c1++)
                        {
                          int uc1 = toupper (c1);
                          if (uc <= uc1 && uc1 <= uc2)
                            charclass_setbit (c1, ccl);
                        }
                    }
                }
              else
                known_bracket_exp = false;

              colon_warning_state |= 8;
              FETCH_WC (lexer, c1, wc1, _("unbalanced ["));
              continue;
            }

          /* In the case [x-], the - is an ordinary hyphen,
             which is left in c1, the lookahead character.  */
          lexer->lexptr  -= lexer->cur_mb_len;
          lexer->lexleft += lexer->cur_mb_len;
        }

      colon_warning_state |= (c == ':') ? 2 : 4;

      if (lexer->unibyte_locale)
        {
          if (lexer->case_fold)
            setbit_case_fold_c (c, ccl);
          else
            charclass_setbit (c, ccl);
          continue;
        }

      if (lexer->case_fold)
        {
          wchar_t folded[FSALEX_CASE_FOLDED_BUFSIZE];
          int i, n = fsalex_case_folded_counterparts (lexer, wc, folded);
          REALLOC_IF_NECESSARY (work_mbc->chars, chars_al,
                                work_mbc->nchars + n);
          for (i = 0; i < n; i++)
            if (!setbit_wc (folded[i], ccl))
              work_mbc->chars[work_mbc->nchars++] = folded[i];
        }
      if (!setbit_wc (wc, ccl))
        {
          REALLOC_IF_NECESSARY (work_mbc->chars, chars_al,
                                work_mbc->nchars + 1);
          work_mbc->chars[work_mbc->nchars++] = wc;
        }
    }
  while ((wc = wc1, (c = c1) != ']'));

  if (colon_warning_state == 7)
    lexer->warn_client (_("character class syntax is [[:space:]], not [:space:]"));

  if (! known_bracket_exp)
    return FSATOKEN_TK_BACKREF;

  if (lexer->multibyte_locale)
    {
      charclass_t *zeroclass = charclass_get_pointer (0);
      work_mbc->invert = invert;
      work_mbc->cset = charclass_equal (ccl, zeroclass)
                              ? -1 : charclass_finalise (ccl);
      return FSATOKEN_TK_MBCSET;
    }

  if (invert)
    {
      assert (lexer->unibyte_locale);
      charclass_notset (ccl);
      if (lexer->syntax_bits & RE_HAT_LISTS_NOT_NEWLINE)
        charclass_clrbit (lexer->eolbyte, ccl);
    }

  return FSATOKEN_TK_CSET + charclass_finalise (ccl);
}

/* The set of wchar_t values C such that there's a useful locale
   somewhere where C != towupper (C) && C != towlower (towupper (C)).
   For example, 0x00B5 (U+00B5 MICRO SIGN) is in this table, because
   towupper (0x00B5) == 0x039C (U+039C GREEK CAPITAL LETTER MU), and
   towlower (0x039C) == 0x03BC (U+03BC GREEK SMALL LETTER MU).  */
static short const lonesome_lower[] =
  {
    0x00B5, 0x0131, 0x017F, 0x01C5, 0x01C8, 0x01CB, 0x01F2, 0x0345,
    0x03C2, 0x03D0, 0x03D1, 0x03D5, 0x03D6, 0x03F0, 0x03F1,

    /* U+03F2 GREEK LUNATE SIGMA SYMBOL lacks a specific uppercase
       counterpart in locales predating Unicode 4.0.0 (April 2003).  */
    0x03F2,

    0x03F5, 0x1E9B, 0x1FBE,
  };

int
fsalex_case_folded_counterparts (fsalex_ctxt_t *lexer, wchar_t c,
                                 wchar_t folded[FSALEX_CASE_FOLDED_BUFSIZE])
{
  int i;
  int n = 0;

  /* Exit quickly if there's nothing to be done.  This test was previously
     found on the client side (e.g. fsaparse), but has been moved here as
     we want to keep internals hidden, if it's not too costly.  */
  if (! lexer->case_fold)
    return 0;

  wint_t uc = towupper (c);
  wint_t lc = towlower (uc);
  if (uc != c)
    folded[n++] = uc;
  if (lc != uc && lc != c && towupper (lc) == uc)
    folded[n++] = lc;
  for (i = 0; i < sizeof lonesome_lower / sizeof *lonesome_lower; i++)
    {
      wint_t li = lonesome_lower[i];
      if (li != lc && li != uc && li != c && towupper (li) == uc)
        folded[n++] = li;
    }
  return n;
}
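
/* Illustrative note (added annotation, not from the original sources):
   with case folding enabled in a typical UTF-8 locale, calling this
   function with c == L's' first records uc == L'S' (towupper differs
   from c), and the lonesome_lower scan then adds U+017F LATIN SMALL
   LETTER LONG S, whose towupper is also L'S' in Unicode-aware locales.
   So matching 's' without regard to case may involve three characters
   in total, not just the upper/lower pair.  */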

fsatoken_token_t
fsalex_lex (fsalex_ctxt_t *lexer)
{
  unsigned int c;
  bool backslash = false;
  int i;
  predicate_entry_t *predicate;
  charclass_t *work_class;

  /* Ensure that syntax () has been called on this lexer instance; many
     things will fail if this isn't done.  */
  assert (lexer->syntax_initialised);

  /* Basic plan: We fetch a character.  If it's a backslash,
     we set the backslash flag and go through the loop again.
     On the plus side, this avoids having a duplicate of the
     main switch inside the backslash case.  On the minus side,
     it means that just about every case begins with
     "if (backslash) ...".  */
  for (i = 0; i < 2; ++i)
    {
      FETCH_WC (lexer, c, lexer->wctok, NULL);
      if (c == (unsigned int) EOF)
        goto normal_char;

      switch (c)
        {
        case '\\':
          if (backslash)
            goto normal_char;
          if (lexer->lexleft == 0)
            lexer->abandon_with_error (_("unfinished \\ escape"));
          backslash = true;
          break;

        case '^':
          if (backslash)
            goto normal_char;
          if (lexer->syntax_bits & RE_CONTEXT_INDEP_ANCHORS
              || lexer->lasttok == FSATOKEN_TK_END
              || lexer->lasttok == FSATOKEN_TK_LPAREN
              || lexer->lasttok == FSATOKEN_TK_OR)
            return lexer->lasttok = FSATOKEN_TK_BEGLINE;
          goto normal_char;

        case '$':
          if (backslash)
            goto normal_char;
          if (lexer->syntax_bits & RE_CONTEXT_INDEP_ANCHORS
              || lexer->lexleft == 0
              || (lexer->syntax_bits & RE_NO_BK_PARENS
                  ? lexer->lexleft > 0 && *lexer->lexptr == ')'
                  : lexer->lexleft > 1 && lexer->lexptr[0] == '\\' && lexer->lexptr[1] == ')')
              || (lexer->syntax_bits & RE_NO_BK_VBAR
                  ? lexer->lexleft > 0 && *lexer->lexptr == '|'
                  : lexer->lexleft > 1 && lexer->lexptr[0] == '\\' && lexer->lexptr[1] == '|')
              || ((lexer->syntax_bits & RE_NEWLINE_ALT)
                  && lexer->lexleft > 0 && *lexer->lexptr == '\n'))
            return lexer->lasttok = FSATOKEN_TK_ENDLINE;
          goto normal_char;

        case '1':
        case '2':
        case '3':
        case '4':
        case '5':
        case '6':
        case '7':
        case '8':
        case '9':
          if (backslash && !(lexer->syntax_bits & RE_NO_BK_REFS))
            {
              lexer->laststart = false;
              return lexer->lasttok = FSATOKEN_TK_BACKREF;
            }
          goto normal_char;

        case '`':
          if (backslash && lexer->re_gnu_ops)
            return lexer->lasttok = FSATOKEN_TK_BEGLINE; /* FIXME: should be beginning of string */
          goto normal_char;

        case '\'':
          if (backslash && lexer->re_gnu_ops)
            return lexer->lasttok = FSATOKEN_TK_ENDLINE;   /* FIXME: should be end of string */
          goto normal_char;

        case '<':
          if (backslash && lexer->re_gnu_ops)
            return lexer->lasttok = FSATOKEN_TK_BEGWORD;
          goto normal_char;

        case '>':
          if (backslash && lexer->re_gnu_ops)
            return lexer->lasttok = FSATOKEN_TK_ENDWORD;
          goto normal_char;

        case 'b':
          if (backslash && lexer->re_gnu_ops)
            return lexer->lasttok = FSATOKEN_TK_LIMWORD;
          goto normal_char;

        case 'B':
          if (backslash && lexer->re_gnu_ops)
            return lexer->lasttok = FSATOKEN_TK_NOTLIMWORD;
          goto normal_char;

        case '?':
          if (lexer->syntax_bits & RE_LIMITED_OPS)
            goto normal_char;
          if (backslash != ((lexer->syntax_bits & RE_BK_PLUS_QM) != 0))
            goto normal_char;
          if (!(lexer->syntax_bits & RE_CONTEXT_INDEP_OPS) && lexer->laststart)
            goto normal_char;
          return lexer->lasttok = FSATOKEN_TK_QMARK;

        case '*':
          if (backslash)
            goto normal_char;
          if (!(lexer->syntax_bits & RE_CONTEXT_INDEP_OPS) && lexer->laststart)
            goto normal_char;
          return lexer->lasttok = FSATOKEN_TK_STAR;

        case '+':
          if (lexer->syntax_bits & RE_LIMITED_OPS)
            goto normal_char;
          if (backslash != ((lexer->syntax_bits & RE_BK_PLUS_QM) != 0))
            goto normal_char;
          if (!(lexer->syntax_bits & RE_CONTEXT_INDEP_OPS) && lexer->laststart)
            goto normal_char;
          return lexer->lasttok = FSATOKEN_TK_PLUS;

        case '{':
          if (!(lexer->syntax_bits & RE_INTERVALS))
            goto normal_char;
          if (backslash != ((lexer->syntax_bits & RE_NO_BK_BRACES) == 0))
            goto normal_char;
          if (!(lexer->syntax_bits & RE_CONTEXT_INDEP_OPS) && lexer->laststart)
            goto normal_char;

          /* Cases:
             {M} - exact count
             {M,} - minimum count, maximum is infinity
             {,N} - 0 through N
             {,} - 0 to infinity (same as '*')
             {M,N} - M through N */
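          /* Illustrative traces (added annotation): "{2,5}" yields
             minrep = 2, maxrep = 5; "{3,}" yields minrep = 3 with
             maxrep = -1 (unbounded); "{,4}" yields minrep = 0,
             maxrep = 4.  Digit accumulation is clamped at
             RE_DUP_MAX + 1 so that oversized counts are reliably
             rejected by the checks below.  */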
          {
            char const *p = lexer->lexptr;
            char const *lim = p + lexer->lexleft;
            int minrep = -1;
            int maxrep = -1;
            for (; p != lim && ISASCIIDIGIT (*p); p++)
              {
                if (minrep < 0)
                  minrep = *p - '0';
                else
                  minrep = MIN (RE_DUP_MAX + 1, minrep * 10 + *p - '0');
              }
            if (p != lim)
              {
                if (*p != ',')
                  maxrep = minrep;
                else
                  {
                    if (minrep < 0)
                      minrep = 0;
                    while (++p != lim && ISASCIIDIGIT (*p))
                      {
                        if (maxrep < 0)
                          maxrep = *p - '0';
                        else
                          maxrep = MIN (RE_DUP_MAX + 1, maxrep * 10 + *p - '0');
                      }
                  }
              }
            if (! ((! backslash || (p != lim && *p++ == '\\'))
                   && p != lim && *p++ == '}'
                   && 0 <= minrep && (maxrep < 0 || minrep <= maxrep)))
              {
                if (lexer->syntax_bits & RE_INVALID_INTERVAL_ORD)
                  goto normal_char;
                lexer->abandon_with_error (_("Invalid content of \\{\\}"));
              }
            if (RE_DUP_MAX < maxrep)
              lexer->abandon_with_error (_("Regular expression too big"));
            lexer->lexptr = p;
            lexer->lexleft = lim - p;
            lexer->minrep = minrep;
            lexer->maxrep = maxrep;
          }
          lexer->laststart = false;
          return lexer->lasttok = FSATOKEN_TK_REPMN;

        case '|':
          if (lexer->syntax_bits & RE_LIMITED_OPS)
            goto normal_char;
          if (backslash != ((lexer->syntax_bits & RE_NO_BK_VBAR) == 0))
            goto normal_char;
          lexer->laststart = true;
          return lexer->lasttok = FSATOKEN_TK_OR;

        case '\n':
          if (lexer->syntax_bits & RE_LIMITED_OPS
              || backslash || !(lexer->syntax_bits & RE_NEWLINE_ALT))
            goto normal_char;
          lexer->laststart = true;
          return lexer->lasttok = FSATOKEN_TK_OR;

        case '(':
          if (backslash != ((lexer->syntax_bits & RE_NO_BK_PARENS) == 0))
            goto normal_char;
          ++lexer->parens;
          lexer->laststart = true;
          return lexer->lasttok = FSATOKEN_TK_LPAREN;

        case ')':
          if (backslash != ((lexer->syntax_bits & RE_NO_BK_PARENS) == 0))
            goto normal_char;
          if (lexer->parens == 0 && lexer->syntax_bits & RE_UNMATCHED_RIGHT_PAREN_ORD)
            goto normal_char;
          --lexer->parens;
          lexer->laststart = false;
          return lexer->lasttok = FSATOKEN_TK_RPAREN;

        case '.':
          if (backslash)
            goto normal_char;
          lexer->laststart = false;
          if (lexer->multibyte_locale)
            {
              /* In a multibyte environment, a period must match a single
                 character, not a single byte, so we use
                 FSATOKEN_TK_ANYCHAR.  */
              return lexer->lasttok = FSATOKEN_TK_ANYCHAR;
            }
          return lexer->lasttok = FSATOKEN_TK_CSET + lexer->dotclass_index;

        case 's':
        case 'S':
          /* Can mean "[[:space:]]" (\s) or its inverse (\S).  */
          if (! (backslash && lexer->re_gnu_ops))
            goto normal_char;
          lexer->laststart = false;
          if (lexer->unibyte_locale)
            {
              predicate = find_pred (lexer, "space");
              if (c == 's')
                return FSATOKEN_TK_CSET
                       + charclass_get_index (predicate->class);
              work_class = charclass_alloc ();
              charclass_copyset (predicate->class, work_class);
              charclass_notset (work_class);
              return FSATOKEN_TK_CSET
                     + charclass_finalise (work_class);
            }

#define PUSH_LEX_STATE(s)                       \
  do                                            \
    {                                           \
      char const *lexptr_saved = lexer->lexptr; \
      size_t lexleft_saved = lexer->lexleft;    \
      lexer->lexptr = (s);                      \
      lexer->lexleft = strlen (lexer->lexptr)

#define POP_LEX_STATE()                         \
      lexer->lexptr = lexptr_saved;             \
      lexer->lexleft = lexleft_saved;           \
    }                                           \
  while (0)
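
/* Note (added annotation): PUSH_LEX_STATE opens a brace scope that
   POP_LEX_STATE closes, so the two macros must always appear as a
   matched pair within a single block, as in the \s / \S handling
   below.  */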

          /* FIXME: see if optimizing this, as is done with FSATOKEN_TK_ANYCHAR and
             add_utf8_anychar, makes sense.  */

          /* \s and \S are documented to be equivalent to [[:space:]] and
             [^[:space:]] respectively, so tell the lexer to process those
             strings, each minus its "already processed" '['.  */
          PUSH_LEX_STATE (c == 's' ? "[:space:]]" : "^[:space:]]");

          lexer->lasttok = parse_bracket_exp (lexer);

          POP_LEX_STATE ();

          lexer->laststart = false;
          return lexer->lasttok;

        case 'w':
        case 'W':
          /* Can mean "[_[:alnum:]]" (\w) or its inverse (\W).  */
          if (! (backslash && lexer->re_gnu_ops))
            goto normal_char;
          lexer->laststart = false;
          predicate = find_pred (lexer, "alnum");
          work_class = charclass_alloc ();
          charclass_copyset (predicate->class, work_class);
          charclass_setbit ('_', work_class);
          if (c == 'w')
            return FSATOKEN_TK_CSET
                   + charclass_finalise (work_class);
          charclass_notset (work_class);
          return FSATOKEN_TK_CSET
                 + charclass_finalise (work_class);

        case '[':
          if (backslash)
            goto normal_char;
          lexer->laststart = false;
          return lexer->lasttok = parse_bracket_exp (lexer);

        default:
        normal_char:
          lexer->laststart = false;
          /* For multibyte character sets, case folding is done later, in
             the parser's handling of atoms.  Always return
             FSATOKEN_TK_WCHAR.  */
          if (lexer->multibyte_locale)
            return lexer->lasttok = FSATOKEN_TK_WCHAR;

          if (lexer->case_fold && isalpha (c))
            {
              charclass_t *ccl = charclass_alloc ();
              setbit_case_fold_c (c, ccl);
              return lexer->lasttok = FSATOKEN_TK_CSET
                          + charclass_finalise (ccl);
            }

          return lexer->lasttok = c;
        }
    }

  /* The above loop should consume at most a backslash
     and some other character.  */
  abort ();
  return FSATOKEN_TK_END;                   /* keeps pedantic compilers happy.  */
}

/* Receive a pattern string and reset the lexer state.  */
void
fsalex_pattern (fsalex_ctxt_t *lexer,
                char const *pattern, size_t const pattern_len)
{
  /* Copy parameters to internal state variables.  */
  lexer->lexptr = pattern;
  lexer->lexleft = pattern_len;

  /* Reset lexical scanner state.  */
  lexer->lasttok = FSATOKEN_TK_END;
  lexer->laststart = true;
  lexer->parens = 0;

  /* Reset multibyte parsing state. */
  lexer->cur_mb_len = 1;
  memset(&lexer->mbrtowc_state, 0, sizeof (lexer->mbrtowc_state));
}

/* Receive syntax directives, and other pattern interpretation
   instructions such as case folding and end-of-line character.  */
void
fsalex_syntax (fsalex_ctxt_t *lexer,
               reg_syntax_t bits, int fold, unsigned char eol)
{
  charclass_t *work_class;
  predicate_entry_t *pred;

  /* Set a flag noting that this lexer has had its syntax params set.  */
  lexer->syntax_initialised = true;

  /* Record the function parameters in our local context.  */
  lexer->syntax_bits = bits;
  lexer->case_fold = fold;
  lexer->eolbyte = eol;

  /* Set up unibyte/multibyte flags, based on MB_CUR_MAX, which depends on
     the current locale.  We capture this information here as the locale
     may change later.  At present, we don't capture MB_CUR_MAX itself.  */
  if (MB_CUR_MAX > 1)
    {
      /* Multibyte locale: Prepare booleans to make code easier to read */
      lexer->unibyte_locale = false;
      lexer->multibyte_locale = true;

      /* Set up an array of structures to hold multibyte character sets.  */
      lexer->nmbcsets = 0;
      lexer->mbcsets_alloc = 2;
      lexer->mbcsets = xzalloc (sizeof (*lexer->mbcsets)
                                      * lexer->mbcsets_alloc);
    }
  else
    {
      /* Unibyte locale: Prepare booleans to make code easier to read */
      lexer->unibyte_locale = true;
      lexer->multibyte_locale = false;
    }

  /* Charclass guarantees that class index 0 is zeroclass, so we don't need
     to set it up here.  */

  /* Set up a character class to match anychar ('.'), tailored to
     accommodate options from the regex syntax.  */
  work_class = charclass_alloc ();
  charclass_notset (work_class);
  if (! (lexer->syntax_bits & RE_DOT_NEWLINE))
    {
      charclass_clrbit (lexer->eolbyte, work_class);
    }
  if (lexer->syntax_bits & RE_DOT_NOT_NULL)
    {
      charclass_clrbit (0, work_class);
    }
  lexer->dotclass_index = charclass_finalise (work_class);
  lexer->dotclass = charclass_get_pointer (lexer->dotclass_index);

  /* Testing for the absence of RE_NO_GNU_OPS in syntax_bits happens often,
     so set a direct flag variable:  This makes code more readable.  */
  lexer->re_gnu_ops = ! (lexer->syntax_bits & RE_NO_GNU_OPS);

  /* Initialise cache and other tables that have syntax and/or locale
     influences.  */

  /* Set up the wchar_desc fields of the predicate table.  */
  for (pred = lexer->predicates; pred->name != NULL; pred++)
    pred->wchar_desc = wctype (pred->name);

  /* Should class "digit" be treated specially, as it is *always* a
     single octet?  This was done in the past by the "single_byte_only"
     field in the predicate list, and we could bring that treatment back
     here, if we wished, with the following code:  */
#if 0
  /* Search for "digit" predicate, initialise it by hand, and, by setting
     its wchar_desc field to 0, mark it as an always-unibyte class.  */
  for (pred = lexer->predicates; pred->name != NULL; pred++)
    if (STREQ(pred->name, "digit"))
      {
        int i;
        charclass_t *isdigit_work_class;
        charclass_index_t work_index;

        isdigit_work_class = charclass_alloc ();
        for (i = 0; i < FSATOKEN_NOTCHAR; i++)
          if (isdigit (i))
            charclass_setbit (i, isdigit_work_class);
        work_index = charclass_finalise (isdigit_work_class);
        pred->class = charclass_get_pointer (work_index);
        pred->wchar_desc = 0;
        break;
      }
#endif /* 0 */

  /* Initialise first-octet cache so multibyte code dealing with
     single-octet codes can avoid the slow function mbrtowc.  */
  mb_uchar_cache (lexer);
}

/* Receive functions to deal with exceptions detected by the lexer:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
void
fsalex_exception_fns (fsalex_ctxt_t *lexer,
                      fsalex_warn_callback_fn *warningfn,
                      fsalex_error_callback_fn *errorfn)
{
  /* Record the provided functions in the lexer's context.  */
  lexer->warn_client        = warningfn;
  lexer->abandon_with_error = errorfn;
}

/* Define external function to do non-core data exchanges.
   This function must conform to proto_lexparse_exchange_fn_t.  */
int
fsalex_exchange (fsalex_ctxt_t *lexer,
                 proto_lexparse_opcode_t opcode,
                 void *param)
{
  switch (opcode)
    {
    case PROTO_LEXPARSE_OP_GET_IS_MULTIBYTE_ENV:
      return (int) lexer->multibyte_locale;
    case PROTO_LEXPARSE_OP_GET_REPMN_MIN:
      return lexer->minrep;
    case PROTO_LEXPARSE_OP_GET_REPMN_MAX:
      return lexer->maxrep;
    case PROTO_LEXPARSE_OP_GET_WIDE_CHAR:
      *((wchar_t *) param) = lexer->wctok;
      break;
    case PROTO_LEXPARSE_OP_GET_DOTCLASS:
      *((charclass_t **) param) = lexer->dotclass;
      break;
    default:
      /* ?? Not sure if we should complain/assert or merely ignore an opcode
         that we don't recognise here.  */
      break;
    }

  /* If we reach here, return value is unimportant, so just say 0.  */
  return 0;
}

/* Add "not provided!" stub function that gets called if the client
   fails to provide proper resources.  This is a hack, merely to get the
   module started; better treatment needs to be added later.  */
static void
no_function_provided (void *unused)
{
  assert (!"fsalex: Plug-in function required, but not provided.");
}

/* Generate a new instance of an FSA lexer.  */
fsalex_ctxt_t *
fsalex_new (void)
{
  fsalex_ctxt_t *new_context;

  /* Acquire zeroed memory for new lexer context.  */
  new_context = XZALLOC (fsalex_ctxt_t);

  /* ?? Point warning and error functions to a "you need to tell me
     these first!" function? */
  new_context->warn_client        = (fsalex_warn_callback_fn *)
                                    no_function_provided;
  new_context->abandon_with_error = (fsalex_error_callback_fn *)
                                    no_function_provided;

  /* Default to working in a non-multibyte locale.  In some cases, FETCH_WC
     never sets this variable (as it's assumed to be 1), so fulfil this
     expectation here.  */
  new_context->cur_mb_len = 1;

  /* Copy the template predicate list into this context, so that we can
     have lexer-specific named predicate classes.  */
  memcpy (new_context->predicates, template_predicate_list,
         sizeof (new_context->predicates));

  /* Default to unibyte locale at first; the final locale setting is made
     according to what's in force when fsalex_syntax () is called.  */
  new_context->unibyte_locale = true;
  new_context->multibyte_locale = false;

  /* Many things depend on decisions made in fsalex_syntax (), so note here
     that it hasn't been called yet, and fail gracefully later if the client
     hasn't called the function before commencing work.  */
  new_context->syntax_initialised = false;

  return new_context;
}
/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-csrc;
 name="fsamusts.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fsamusts.c"

/* fsamusts -- Report a list of must-have simple strings in the pattern

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* (?? Long description/discussion goes here...) */

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include <ctype.h>
#include "fsamusts.h"
#include "fsatoken.h"
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include "xalloc.h"

#if DEBUG
#include <stdio.h>
#endif /* DEBUG */

/* XNMALLOC defined here is identical to the ones in gnulib's xalloc.h,
   except that it does not cast the result to "(t *)", and thus may
   be used via type-free MALLOC macros.  Note that we've left out
   XCALLOC here as this module does not use it.  */
#undef XNMALLOC
/* Allocate memory for N elements of type T, with error checking.  */
/* extern t *XNMALLOC (size_t n, typename t); */
# define XNMALLOC(n, t) \
    (sizeof (t) == 1 ? xmalloc (n) : xnmalloc (n, sizeof (t)))

#define MALLOC(p, n) do { (p) = XNMALLOC (n, *(p)); } while (0)

#define REALLOC(p, n) do {(p) = xnrealloc (p, n, sizeof (*(p))); } while (0)

#define STREQ(a, b) (strcmp (a, b) == 0)

/* Having found the postfix representation of the regular expression,
   try to find a long sequence of characters that must appear in any line
   containing the r.e.
   Finding a "longest" sequence is beyond the scope here;
   we take an easy way out and hope for the best.
   (Take "(ab|a)b"--please.)

   We do a bottom-up calculation of sequences of characters that must appear
   in matches of r.e.'s represented by trees rooted at the nodes of the postfix
   representation:
        sequences that must appear at the left of the match ("left")
        sequences that must appear at the right of the match ("right")
        lists of sequences that must appear somewhere in the match ("in")
        sequences that must constitute the match ("is")

   When we get to the root of the tree, we use one of the longest of its
   calculated "in" sequences as our answer.  The sequence we find is returned in
   d->must (where "d" is the single argument passed to "dfamust");
   the length of the sequence is returned in d->mustn.

   The sequences calculated for the various types of node (in pseudo-ANSI C)
   are shown below.  "p" is the operand of unary operators (and the left-hand
   operand of binary operators); "q" is the right-hand operand of binary
   operators.

   "ZERO" means "a zero-length sequence" below.

        Type	left		right		is		in
        ----	----		-----		--		--
        char c	# c		# c		# c		# c

        ANYCHAR	ZERO		ZERO		ZERO		ZERO

        MBCSET	ZERO		ZERO		ZERO		ZERO

        CSET	ZERO		ZERO		ZERO		ZERO

        STAR	ZERO		ZERO		ZERO		ZERO

        QMARK	ZERO		ZERO		ZERO		ZERO

        PLUS	p->left		p->right	ZERO		p->in

        CAT	(p->is==ZERO)?	(q->is==ZERO)?	(p->is!=ZERO &&	p->in plus
                p->left :	q->right :	q->is!=ZERO) ?	q->in plus
                p->is##q->left	p->right##q->is	p->is##q->is :	p->right##q->left
                                                ZERO

        OR	longest common	longest common	(do p->is and	substrings common to
                leading		trailing	q->is have same	p->in and q->in
                (sub)sequence	(sub)sequence	length and
                of p->left	of p->right	content) ?
                and q->left	and q->right	p->is : NULL

   If there's anything else we recognize in the tree, all four sequences get set
   to zero-length sequences.  If there's something we don't recognize in the
   tree, we just return a zero-length sequence.

   Break ties in favor of infrequent letters (choosing 'zzz' in preference to
   'aaa')?

   And ... is it here or someplace that we might ponder "optimizations" such as
        egrep 'psi|epsilon'	->	egrep 'psi'
        egrep 'pepsi|epsilon'	->	egrep 'epsi'
                                        (Yes, we now find "epsi" as a "string
                                        that must occur", but we might also
                                        simplify the *entire* r.e. being sought)
        grep '[c]'		->	grep 'c'
        grep '(ab|a)b'		->	grep 'ab'
        grep 'ab*'		->	grep 'a'
        grep 'a*b'		->	grep 'b'

   There are several issues:

   Is optimization easy (enough)?

   Does optimization actually accomplish anything,
   or is the automaton you get from "psi|epsilon" (for example)
   the same as the one you get from "psi" (for example)?

   Are optimizable r.e.'s likely to be used in real-life situations
   (something like 'ab*' is probably unlikely; something like
   'psi|epsilon' is likelier)?  */

static char *
icatalloc (char *old, char const *new)
{
  char *result;
  size_t oldsize = old == NULL ? 0 : strlen (old);
  size_t newsize = new == NULL ? 0 : strlen (new);
  if (newsize == 0)
    return old;
  result = xrealloc (old, oldsize + newsize + 1);
  memcpy (result + oldsize, new, newsize + 1);
  return result;
}

static char *
icpyalloc (char const *string)
{
  return icatalloc (NULL, string);
}

static char *_GL_ATTRIBUTE_PURE
istrstr (char const *lookin, char const *lookfor)
{
  char const *cp;
  size_t len;

  len = strlen (lookfor);
  for (cp = lookin; *cp != '\0'; ++cp)
    if (strncmp (cp, lookfor, len) == 0)
      return (char *) cp;
  return NULL;
}

static void
freelist (char **cpp)
{
  size_t i;

  if (cpp == NULL)
    return;
  for (i = 0; cpp[i] != NULL; ++i)
    {
      free (cpp[i]);
      cpp[i] = NULL;
    }
}

static char **
enlist (char **cpp, char *new, size_t len)
{
  size_t i, j;

  if (cpp == NULL)
    return NULL;
  if ((new = icpyalloc (new)) == NULL)
    {
      freelist (cpp);
      return NULL;
    }
  new[len] = '\0';
  /* Is there already something in the list that's new (or longer)?  */
  for (i = 0; cpp[i] != NULL; ++i)
    if (istrstr (cpp[i], new) != NULL)
      {
        free (new);
        return cpp;
      }
  /* Eliminate any obsoleted strings.  */
  j = 0;
  while (cpp[j] != NULL)
    if (istrstr (new, cpp[j]) == NULL)
      ++j;
    else
      {
        free (cpp[j]);
        if (--i == j)
          break;
        cpp[j] = cpp[i];
        cpp[i] = NULL;
      }
  /* Add the new string.  */
  REALLOC (cpp, i + 2);
  cpp[i] = new;
  cpp[i + 1] = NULL;
  return cpp;
}

/* Given pointers to two strings, return a pointer to an allocated
   list of their distinct common substrings.  Return NULL if something
   seems wild.  */
static char **
comsubs (char *left, char const *right)
{
  char **cpp;
  char *lcp;
  char *rcp;
  size_t i, len;

  if (left == NULL || right == NULL)
    return NULL;
  cpp = malloc (sizeof *cpp);
  if (cpp == NULL)
    return NULL;
  cpp[0] = NULL;
  for (lcp = left; *lcp != '\0'; ++lcp)
    {
      len = 0;
      rcp = strchr (right, *lcp);
      while (rcp != NULL)
        {
          for (i = 1; lcp[i] != '\0' && lcp[i] == rcp[i]; ++i)
            continue;
          if (i > len)
            len = i;
          rcp = strchr (rcp + 1, *lcp);
        }
      if (len == 0)
        continue;
      {
        char **p = enlist (cpp, lcp, len);
        if (p == NULL)
          {
            freelist (cpp);
            cpp = NULL;
            break;
          }
        cpp = p;
      }
    }
  return cpp;
}

static char **
addlists (char **old, char **new)
{
  size_t i;

  if (old == NULL || new == NULL)
    return NULL;
  for (i = 0; new[i] != NULL; ++i)
    {
      old = enlist (old, new[i], strlen (new[i]));
      if (old == NULL)
        break;
    }
  return old;
}

typedef struct
{
  char **in;
  char *left;
  char *right;
  char *is;
} must;

/* Given two lists of substrings, return a new list giving substrings
   common to both.  */
static char **
inboth (char **left, char **right)
{
  char **both;
  char **temp;
  size_t lnum, rnum;

  if (left == NULL || right == NULL)
    return NULL;
  both = malloc (sizeof *both);
  if (both == NULL)
    return NULL;
  both[0] = NULL;
  for (lnum = 0; left[lnum] != NULL; ++lnum)
    {
      for (rnum = 0; right[rnum] != NULL; ++rnum)
        {
          temp = comsubs (left[lnum], right[rnum]);
          if (temp == NULL)
            {
              freelist (both);
              return NULL;
            }
          both = addlists (both, temp);
          freelist (temp);
          free (temp);
          if (both == NULL)
            return NULL;
        }
    }
  return both;
}

static void
resetmust (must * mp)
{
  mp->left[0] = mp->right[0] = mp->is[0] = '\0';
  freelist (mp->in);
}

/* Receive an existing list (possibly empty) of must-have strings, together
   with a list of the tokens for the current FSA (postfix tree order), and
   if there are any more must-have strings in the token list, add them to
   the must-have list.  Returns the possibly-modified list to the caller.
   Locale and syntax items are partially covered here by the case_fold and
   unibyte_locale flags, but this is incomplete, and should be addressed by
   Stage 2 (improving the expressiveness of tokens).  */
fsamusts_list_element_t *
fsamusts_must (fsamusts_list_element_t *must_list,
              size_t nr_tokens, fsatoken_token_t *token_list,
              bool case_fold, bool unibyte_locale)
{
  must *musts;
  must *mp;
  char *result;
  size_t ri;
  size_t i;
  bool exact;
  static must must0;
  static char empty_string[] = "";

  result = empty_string;
  exact = false;
  MALLOC (musts, nr_tokens + 1);
  mp = musts;
  for (i = 0; i <= nr_tokens; ++i)
    mp[i] = must0;
  for (i = 0; i <= nr_tokens; ++i)
    {
      mp[i].in = xmalloc (sizeof *mp[i].in);
      mp[i].left = xmalloc (2);
      mp[i].right = xmalloc (2);
      mp[i].is = xmalloc (2);
      mp[i].left[0] = mp[i].right[0] = mp[i].is[0] = '\0';
      mp[i].in[0] = NULL;
    }
#ifdef DEBUG
  fprintf (stderr, "dfamust:\n");
  for (i = 0; i < nr_tokens; ++i)
    {
      fprintf (stderr, " %zu:", i);
      fsatoken_prtok (token_list[i]);
    }
  putc ('\n', stderr);
#endif
  for (ri = 0; ri < nr_tokens; ++ri)
    {
      fsatoken_token_t t = token_list[ri];
      switch (t)
        {
        case FSATOKEN_TK_LPAREN:
        case FSATOKEN_TK_RPAREN:
          assert (!"neither FSATOKEN_TK_LPAREN nor FSATOKEN_TK_RPAREN may appear here");

        case FSATOKEN_TK_STAR:
        case FSATOKEN_TK_QMARK:
          assert (musts < mp);
          --mp;
          /* Fall through.  */
        case FSATOKEN_TK_EMPTY:
        case FSATOKEN_TK_BEGLINE:
        case FSATOKEN_TK_ENDLINE:
        case FSATOKEN_TK_BEGWORD:
        case FSATOKEN_TK_ENDWORD:
        case FSATOKEN_TK_LIMWORD:
        case FSATOKEN_TK_NOTLIMWORD:
        case FSATOKEN_TK_BACKREF:
        case FSATOKEN_TK_ANYCHAR:
        case FSATOKEN_TK_MBCSET:
          resetmust (mp);
          break;

        case FSATOKEN_TK_OR:
          assert (&musts[2] <= mp);
          {
            char **new;
            must *lmp;
            must *rmp;
            size_t j, ln, rn, n;

            rmp = --mp;
            lmp = --mp;
            /* Per the table above, OR's "is" survives only if both
               alternatives must match exactly the same string.  */
            if (!STREQ (lmp->is, rmp->is))
              lmp->is[0] = '\0';
            /* Left side--easy */
            i = 0;
            while (lmp->left[i] != '\0' && lmp->left[i] == rmp->left[i])
              ++i;
            lmp->left[i] = '\0';
            /* Right side */
            ln = strlen (lmp->right);
            rn = strlen (rmp->right);
            n = ln;
            if (n > rn)
              n = rn;
            for (i = 0; i < n; ++i)
              if (lmp->right[ln - i - 1] != rmp->right[rn - i - 1])
                break;
            for (j = 0; j < i; ++j)
              lmp->right[j] = lmp->right[(ln - i) + j];
            lmp->right[j] = '\0';
            new = inboth (lmp->in, rmp->in);
            if (new == NULL)
              goto done;
            freelist (lmp->in);
            free (lmp->in);
            lmp->in = new;
          }
          break;

        case FSATOKEN_TK_PLUS:
          assert (musts < mp);
          --mp;
          mp->is[0] = '\0';
          break;

        case FSATOKEN_TK_END:
          assert (mp == &musts[1]);
          for (i = 0; musts[0].in[i] != NULL; ++i)
            if (strlen (musts[0].in[i]) > strlen (result))
              result = musts[0].in[i];
          if (STREQ (result, musts[0].is))
            exact = true;
          goto done;

        case FSATOKEN_TK_CAT:
          assert (&musts[2] <= mp);
          {
            must *lmp;
            must *rmp;

            rmp = --mp;
            lmp = --mp;
            /* In.  Everything in left, plus everything in
               right, plus concatenation of
               left's right and right's left.  */
            lmp->in = addlists (lmp->in, rmp->in);
            if (lmp->in == NULL)
              goto done;
            if (lmp->right[0] != '\0' && rmp->left[0] != '\0')
              {
                char *tp;

                tp = icpyalloc (lmp->right);
                tp = icatalloc (tp, rmp->left);
                lmp->in = enlist (lmp->in, tp, strlen (tp));
                free (tp);
                if (lmp->in == NULL)
                  goto done;
              }
            /* Left-hand */
            if (lmp->is[0] != '\0')
              {
                lmp->left = icatalloc (lmp->left, rmp->left);
                if (lmp->left == NULL)
                  goto done;
              }
            /* Right-hand */
            if (rmp->is[0] == '\0')
              lmp->right[0] = '\0';
            lmp->right = icatalloc (lmp->right, rmp->right);
            if (lmp->right == NULL)
              goto done;
            /* Per the table above, CAT's "is" is exact only if both
               operands are exact.  */
            if (lmp->is[0] != '\0' && rmp->is[0] != '\0')
              {
                lmp->is = icatalloc (lmp->is, rmp->is);
                if (lmp->is == NULL)
                  goto done;
              }
            else
              lmp->is[0] = '\0';
          }
          break;

        case '\0':
          /* Not on *my* shift.  */
          goto done;

        default:
          resetmust (mp);
          if (FSATOKEN_TK_CSET <= t)
            {
              /* If T is a singleton, or if case-folding in a unibyte
                 locale and T's members all case-fold to the same char,
                 convert T to one of its members.  Otherwise, do
                 nothing further with T.  */
              charclass_t *ccl = charclass_get_pointer (t - FSATOKEN_TK_CSET);
              int j;
              for (j = 0; j < FSATOKEN_NOTCHAR; j++)
                if (charclass_tstbit (j, ccl))
                  break;
              if (! (j < FSATOKEN_NOTCHAR))
                break;
              t = j;
              while (++j < FSATOKEN_NOTCHAR)
                if (charclass_tstbit (j, ccl)
                    && ! (case_fold && unibyte_locale
                          && toupper (j) == toupper (t)))
                  break;
              if (j < FSATOKEN_NOTCHAR)
                break;
            }
          mp->is[0] = mp->left[0] = mp->right[0]
            = case_fold && unibyte_locale ? toupper (t) : t;
          mp->is[1] = mp->left[1] = mp->right[1] = '\0';
          mp->in = enlist (mp->in, mp->is, 1);
          if (mp->in == NULL)
            goto done;
          break;
        }
#ifdef DEBUG
      fprintf (stderr, " node: %zu:", ri);
      fsatoken_prtok (token_list[ri]);
      fprintf (stderr, "\n  in:");
      for (i = 0; mp->in[i]; ++i)
        fprintf (stderr, " \"%s\"", mp->in[i]);
      fprintf (stderr, "\n  is: \"%s\"\n", mp->is);
      fprintf (stderr, "  left: \"%s\"\n", mp->left);
      fprintf (stderr, "  right: \"%s\"\n", mp->right);
#endif
      ++mp;
    }
done:
  if (strlen (result))
    {
      fsamusts_list_element_t *dm;

      MALLOC (dm, 1);
      dm->exact = exact;
      dm->must = xmemdup (result, strlen (result) + 1);
      dm->next = must_list;
      must_list = dm;
    }
  mp = musts;
  for (i = 0; i <= nr_tokens; ++i)
    {
      freelist (mp[i].in);
      free (mp[i].in);
      free (mp[i].left);
      free (mp[i].right);
      free (mp[i].is);
    }
  free (mp);

  return must_list;
}

/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-chdr;
 name="fsamusts.h"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fsamusts.h"

/* fsamusts -- Report a list of must-have simple strings in the pattern

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* ?? Insert long description/discussion here.  */

#ifndef FSAMUSTS_H
#define FSAMUSTS_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include "fsatoken.h"

/* Element of a list of strings, at least one of which is known to
   appear in any string matching the regular expression.  */
typedef struct fsamusts_list_element
{
  bool exact;
  char *must;
  struct fsamusts_list_element *next;
} fsamusts_list_element_t;

/* Receive an existing list (possibly empty) of must-have strings, together
   with a list of the tokens for the current FSA (postfix tree order), and
   if there are any more must-have strings in the token list, add them to
   the must-have list.  Returns the possibly-modified list to the caller.
   Locale and syntax items are partially covered here by the case_fold and
   unibyte_locale flags, but this is incomplete, and should be addressed by
   Stage 2 (improving the expressiveness of tokens).  */
extern fsamusts_list_element_t *
fsamusts_must (fsamusts_list_element_t *must_list,
              size_t nr_tokens, fsatoken_token_t *token_list,
              bool case_fold, bool unibyte_locale);

#endif /* FSAMUSTS_H */

/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-csrc;
 name="fsaparse.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fsaparse.c"

/* fsaparse -- Build a structure naming relationships (sequences, alternatives,
               backreferences, options and precedence) of tokens

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* This function receives a stream of tokens from fsalex, and processes
   them to impose precedence rules and to describe complex pattern elements
   that are beyond the capability of the simple lexer.  In addition to the
   cases explicit in the syntax (e.g. "(ab|c)"), variable-length multibyte
   encodings (UTF-8; codesets including modifiers and/or shift items) also
   require these enhanced facilities.  */

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include "fsaparse.h"
#include "fsalex.h"
#include "fsatoken.h"
#include "proto-lexparse.h"
#include <stdbool.h>
#include "xalloc.h"

/* gettext.h ensures that we don't use gettext if ENABLE_NLS is not defined */
#include "gettext.h"
#define _(str) gettext (str)

#include <wchar.h>
#include <wctype.h>

/* Reallocate an array of type *P if N_ALLOC is <= N_REQUIRED.  */
#define REALLOC_IF_NECESSARY(p, n_alloc, n_required)		\
  do								\
    {								\
      if ((n_alloc) <= (n_required))				\
        {							\
          size_t new_n_alloc = (n_required) + !(p);		\
          (p) = x2nrealloc (p, &new_n_alloc, sizeof (*(p)));	\
          (n_alloc) = new_n_alloc;				\
        }							\
    }								\
  while (0)

#if HAVE_LANGINFO_CODESET
# include <langinfo.h>
#endif

/* ?? Sigh... wanted to keep multibyte code in fsaPARSE(!) to a minimum, but
   a LOT of code breaks if struct mb_char_classes isn't defined.  */

/* A bracket operator.
   e.g., [a-c], [[:alpha:]], etc.  */
struct mb_char_classes
{
  ptrdiff_t cset;
  bool invert;
  wchar_t *chars;               /* Normal characters.  */
  size_t nchars;
  wctype_t *ch_classes;         /* Character classes.  */
  size_t nch_classes;
  wchar_t *range_sts;           /* Range characters (start of the range).  */
  wchar_t *range_ends;          /* Range characters (end of the range).  */
  size_t nranges;
  char **equivs;                /* Equivalence classes.  */
  size_t nequivs;
  char **coll_elems;
  size_t ncoll_elems;           /* Collating elements.  */
};

/* fsaparse_ctxt: Gather all the context to do with the parser into a single
   struct.  We do this mainly because it makes it easier to contemplate
   having multiple instances of this module running in parallel, but also
   because it makes translating from "dfa->" easier.  This definition
   fleshes out the opaque type given in the module header.  */
struct fsaparse_ctxt_struct
{
  /* Warning and abort functions provided by client.  */
  fsalex_warn_callback_fn *warn_client;
  fsalex_error_callback_fn *abandon_with_error;

  /* Plug-in functions and context to deal with lexer at arm's length.  */
  proto_lexparse_lex_fn_t *lexer;
  proto_lexparse_exchange_fn_t *lex_exchange;
  void *lex_context;

  /* Information about locale (needs to sync with lexer...?) */
  bool multibyte_locale;
  bool unibyte_locale;

  fsatoken_token_t lookahead_token;

  size_t current_depth;         /* Current depth of a hypothetical stack
                                   holding deferred productions.  This is
                                   used to determine the depth that will
                                   be required of the real stack later on
                                   in dfaanalyze.  */
  /* Fields filled by the parser.  */
  fsatoken_token_t *tokens;                /* Postfix parse array.  */
  size_t tindex;                /* Index for adding new tokens.  */
  size_t talloc;                /* Number of tokens currently allocated.  */
  size_t depth;                 /* Depth required of an evaluation stack
                                   used for depth-first traversal of the
                                   parse tree.  */
  size_t nleaves;               /* Number of leaves on the parse tree.  */
  size_t nregexps;              /* Count of parallel regexps being built
                                   with dfaparse.  */
  unsigned int mb_cur_max;      /* Cached value of MB_CUR_MAX.  */
  fsatoken_token_t utf8_anychar_classes[5]; /* To lower ANYCHAR in UTF-8 locales.  */
  /* The following are used only if MB_CUR_MAX > 1.  */

  /* The value of multibyte_prop[i] is defined by the following rule:
     if tokens[i] < NOTCHAR
     bit 0 : tokens[i] is the first byte of a character, including
     single-byte characters.
     bit 1 : tokens[i] is the last byte of a character, including
     single-byte characters.

     if tokens[i] = MBCSET
     ("the index of mbcsets corresponding to this operator" << 2) + 3

     e.g.
     tokens
     = 'single_byte_a', 'multi_byte_A', 'single_byte_b'
     = 'sb_a', 'mb_A(1st byte)', 'mb_A(2nd byte)', 'mb_A(3rd byte)', 'sb_b'
     multibyte_prop
     = 3     , 1               ,  0              ,  2              , 3
   */
  size_t nmultibyte_prop;
  int *multibyte_prop;
  /* Array of the bracket expression in the DFA.  */
  struct mb_char_classes *mbcsets;
  size_t nmbcsets;
  size_t mbcsets_alloc;
};

/* UTF-8 encoding allows some optimizations that we can't otherwise
   assume in a multibyte encoding.  */
static int
using_utf8 (void)
{
  static int utf8 = -1;
  if (utf8 < 0)
    {
      wchar_t wc;
      mbstate_t mbs = { 0 };
      utf8 = mbrtowc (&wc, "\xc4\x80", 2, &mbs) == 2 && wc == 0x100;
    }
  return utf8;
}

/* Recursive descent parser for regular expressions.  */

static void
addtok_mb (fsaparse_ctxt_t *parser, fsatoken_token_t t, int mbprop)
{
  if (parser->multibyte_locale)
    {
      REALLOC_IF_NECESSARY (parser->multibyte_prop, parser->nmultibyte_prop,
                            parser->tindex + 1);
      parser->multibyte_prop[parser->tindex] = mbprop;
    }

  REALLOC_IF_NECESSARY (parser->tokens, parser->talloc, parser->tindex + 1);
  parser->tokens[parser->tindex++] = t;

  switch (t)
    {
    case FSATOKEN_TK_QMARK:
    case FSATOKEN_TK_STAR:
    case FSATOKEN_TK_PLUS:
      break;

    case FSATOKEN_TK_CAT:
    case FSATOKEN_TK_OR:
      --parser->current_depth;
      break;

    default:
      ++parser->nleaves;
      /* Fall through.  */
    case FSATOKEN_TK_EMPTY:
      ++parser->current_depth;
      break;
    }
  if (parser->depth < parser->current_depth)
    parser->depth = parser->current_depth;
}

static void addtok_wc (fsaparse_ctxt_t *parser, wint_t wc);

/* Add the given token to the parse tree, maintaining the depth count and
   updating the maximum depth if necessary.  */
static void
addtok (fsaparse_ctxt_t *parser, fsatoken_token_t t)
{
  if (parser->multibyte_locale && t == FSATOKEN_TK_MBCSET)
    {
      bool need_or = false;
      struct mb_char_classes *work_mbc = &parser->mbcsets[parser->nmbcsets - 1];

      /* Extract wide characters into alternations for better performance.
         This does not require UTF-8.  */
      if (!work_mbc->invert)
        {
          size_t i;
          for (i = 0; i < work_mbc->nchars; i++)
            {
              addtok_wc (parser, work_mbc->chars[i]);
              if (need_or)
                addtok (parser, FSATOKEN_TK_OR);
              need_or = true;
            }
          work_mbc->nchars = 0;
        }

      /* If the FSATOKEN_TK_MBCSET is non-inverted and includes no
         character classes containing multibyte characters, no range
         expressions, no equivalence classes and no collating elements,
         it can be replaced by a simple FSATOKEN_TK_CSET.  */
      if (work_mbc->invert
          || work_mbc->nch_classes != 0
          || work_mbc->nranges != 0
          || work_mbc->nequivs != 0 || work_mbc->ncoll_elems != 0)
        {
          addtok_mb (parser, FSATOKEN_TK_MBCSET, ((parser->nmbcsets - 1) << 2) + 3);
          if (need_or)
            addtok (parser, FSATOKEN_TK_OR);
        }
      else
        {
          /* Characters have been handled above, so it is possible
             that the mbcset is empty now.  Do nothing in that case.  */
          if (work_mbc->cset != -1)
            {
              addtok (parser, FSATOKEN_TK_CSET + work_mbc->cset);
              if (need_or)
                addtok (parser, FSATOKEN_TK_OR);
            }
        }
    }
  else
    {
      addtok_mb (parser, t, 3);
    }
}
/* We treat a multibyte character as a single atom, so that DFA
   can treat a multibyte character as a single expression.

   e.g., we construct the following tree from "<mb1><mb2>".
   <mb1(1st-byte)><mb1(2nd-byte)><FSATOKEN_TK_CAT><mb1(3rd-byte)><FSATOKEN_TK_CAT>
   <mb2(1st-byte)><mb2(2nd-byte)><FSATOKEN_TK_CAT><mb2(3rd-byte)><FSATOKEN_TK_CAT><FSATOKEN_TK_CAT> */
static void
addtok_wc (fsaparse_ctxt_t *parser, wint_t wc)
{
  unsigned char buf[MB_LEN_MAX];
  mbstate_t s = { 0 };
  int i;
  int cur_mb_len;
  size_t stored_bytes = wcrtomb ((char *) buf, wc, &s);

  if (stored_bytes != (size_t) -1)
    cur_mb_len = stored_bytes;
  else
    {
      /* This is merely a stop-gap.  buf[0] is undefined, yet skipping
         the addtok_mb call altogether can corrupt the heap.  */
      cur_mb_len = 1;
      buf[0] = 0;
    }

  addtok_mb (parser, buf[0], cur_mb_len == 1 ? 3 : 1);
  for (i = 1; i < cur_mb_len; i++)
    {
      addtok_mb (parser, buf[i], i == cur_mb_len - 1 ? 2 : 0);
      addtok (parser, FSATOKEN_TK_CAT);
    }
}
static void
add_utf8_anychar (fsaparse_ctxt_t *parser)
{
  unsigned int i;

  /* Have we set up the classes for the 1-byte to 4-byte sequence types?  */
  if (parser->utf8_anychar_classes[0] == 0)
    {
      /* No, first time we've been called, so set them up now.  */
      charclass_t *ccl;
      const charclass_t *dotclass;

      /* Index 0: 80-bf -- Non-leading bytes.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0x80, 0xbf, ccl);
      parser->utf8_anychar_classes[0] = charclass_finalise (ccl);

      /* Index 1: 00-7f -- 1-byte leading seq, minus dotclass exceptions.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0x00, 0x7f, ccl);
      fsalex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_DOTCLASS,
                       &dotclass);
      charclass_intersectset (dotclass, ccl);
      parser->utf8_anychar_classes[1] = charclass_finalise (ccl);

      /* Index 2: c2-df -- 2-byte sequence.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0xc2, 0xdf, ccl);
      parser->utf8_anychar_classes[2] = charclass_finalise (ccl);

      /* Index 3: e0-ef -- 3-byte sequence.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0xe0, 0xef, ccl);
      parser->utf8_anychar_classes[3] = charclass_finalise (ccl);

      /* Index 4: f0-f7 -- 4-byte sequence.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0xf0, 0xf7, ccl);
      parser->utf8_anychar_classes[4] = charclass_finalise (ccl);
    }

  /* A valid UTF-8 character is

     ([0x00-0x7f]
     |[0xc2-0xdf][0x80-0xbf]
     |[0xe0-0xef][0x80-0xbf][0x80-0xbf]
     |[0xf0-0xf7][0x80-0xbf][0x80-0xbf][0x80-0xbf])

     which I'll write more concisely "B|CA|DAA|EAAA".  Factor the [0x00-0x7f]
     and you get "B|(C|(D|EA)A)A".  And since the token buffer is in reverse
     Polish notation, you get "B C D E A CAT OR A CAT OR A CAT OR".  */
  /* Write out leaf tokens for each of the four possible starting bytes.  */
  for (i = 1; i < 5; i++)
    addtok (parser, FSATOKEN_TK_CSET + parser->utf8_anychar_classes[i]);
  /* Add follow-on classes, plus tokens to build a postfix tree covering all
     four alternatives of valid UTF-8 sequences.  */
  for (i = 1; i <= 3; i++)
    {
      addtok (parser, FSATOKEN_TK_CSET + parser->utf8_anychar_classes[0]);
      addtok (parser, FSATOKEN_TK_CAT);
      addtok (parser, FSATOKEN_TK_OR);
    }
}

/* The grammar understood by the parser is as follows.

   regexp:
     regexp FSATOKEN_TK_OR branch
     branch

   branch:
     branch closure
     closure

   closure:
     closure FSATOKEN_TK_QMARK
     closure FSATOKEN_TK_STAR
     closure FSATOKEN_TK_PLUS
     closure FSATOKEN_TK_REPMN
     atom

   atom:
     <normal character>
     <multibyte character>
     FSATOKEN_TK_ANYCHAR
     FSATOKEN_TK_MBCSET
     FSATOKEN_TK_CSET
     FSATOKEN_TK_BACKREF
     FSATOKEN_TK_BEGLINE
     FSATOKEN_TK_ENDLINE
     FSATOKEN_TK_BEGWORD
     FSATOKEN_TK_ENDWORD
     FSATOKEN_TK_LIMWORD
     FSATOKEN_TK_NOTLIMWORD
     FSATOKEN_TK_LPAREN regexp FSATOKEN_TK_RPAREN
     <empty>

   The parser builds a parse tree in postfix form in an array of tokens.  */

/* Provide a forward declaration for regexp, as it is at the top of the
   parse tree, but is referenced by atom, at the bottom of the tree.  */
static void regexp (fsaparse_ctxt_t *parser);

static void
atom (fsaparse_ctxt_t *parser)
{
  fsatoken_token_t tok = parser->lookahead_token;

  if (tok == FSATOKEN_TK_WCHAR)
    {
      wchar_t wctok;
      int i, n;
      wchar_t folded[FSALEX_CASE_FOLDED_BUFSIZE];

      fsalex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_WIDE_CHAR,
                       &wctok);
      addtok_wc (parser, wctok);

      n = fsalex_case_folded_counterparts (parser->lex_context,
                                           wctok, folded);
      for (i = 0; i < n; i++)
        {
          addtok_wc (parser, folded[i]);
          addtok (parser, FSATOKEN_TK_OR);
        }

      parser->lookahead_token = parser->lexer (parser->lex_context);
    }
  else if (tok == FSATOKEN_TK_ANYCHAR && using_utf8 ())
    {
      /* For UTF-8 expand the period to a series of CSETs that define a valid
         UTF-8 character.  This avoids using the slow multibyte path.  I'm
         pretty sure it would be both profitable and correct to do it for
         any encoding; however, the optimization must be done manually as
         it is done above in add_utf8_anychar.  So, let's start with
         UTF-8: it is the most used, and the structure of the encoding
         makes the correctness more obvious.  */
      add_utf8_anychar (parser);
      parser->lookahead_token = parser->lexer (parser->lex_context);
    }
  else if ((tok >= 0 && tok < FSATOKEN_NOTCHAR) || tok >= FSATOKEN_TK_CSET || tok == FSATOKEN_TK_BACKREF
           || tok == FSATOKEN_TK_BEGLINE || tok == FSATOKEN_TK_ENDLINE || tok == FSATOKEN_TK_BEGWORD
           || tok == FSATOKEN_TK_ANYCHAR || tok == FSATOKEN_TK_MBCSET
           || tok == FSATOKEN_TK_ENDWORD || tok == FSATOKEN_TK_LIMWORD || tok == FSATOKEN_TK_NOTLIMWORD)
    {
      addtok (parser, tok);
      parser->lookahead_token = parser->lexer (parser->lex_context);
    }
  else if (tok == FSATOKEN_TK_LPAREN)
    {
      parser->lookahead_token = parser->lexer (parser->lex_context);
      regexp (parser);
      tok = parser->lookahead_token;
      if (tok != FSATOKEN_TK_RPAREN)
        parser->abandon_with_error (_("unbalanced ("));
      parser->lookahead_token = parser->lexer (parser->lex_context);
    }
  else
    addtok (parser, FSATOKEN_TK_EMPTY);
}

/* Return the number of tokens in the given subexpression.  */
static size_t _GL_ATTRIBUTE_PURE
nsubtoks (fsaparse_ctxt_t *parser, size_t tindex)
{
  size_t ntoks1;

  switch (parser->tokens[tindex - 1])
    {
    default:
      return 1;
    case FSATOKEN_TK_QMARK:
    case FSATOKEN_TK_STAR:
    case FSATOKEN_TK_PLUS:
      return 1 + nsubtoks (parser, tindex - 1);
    case FSATOKEN_TK_CAT:
    case FSATOKEN_TK_OR:
      ntoks1 = nsubtoks (parser, tindex - 1);
      return 1 + ntoks1 + nsubtoks (parser, tindex - 1 - ntoks1);
    }
}

/* Copy the given subexpression to the top of the tree.  */
static void
copytoks (fsaparse_ctxt_t *parser, size_t tindex, size_t ntokens)
{
  size_t i;

  if (parser->multibyte_locale)
    for (i = 0; i < ntokens; ++i)
      addtok_mb (parser, parser->tokens[tindex + i], parser->multibyte_prop[tindex + i]);
  else
    for (i = 0; i < ntokens; ++i)
      addtok_mb (parser, parser->tokens[tindex + i], 3);
}

/* Rewriting fsaparse:closure () from scratch; the original is clever but a
   little tricky to follow, so I'm trying to break up a while + compound-if
   loop into a simpler construct (more like a finite-state machine).  Also,
   edits such as replacing "dfa->" with "parser->" are done here, adding
   "parser" as a parameter in lots of places, as well as adding the
   long-winded "FSATOKEN_TK_" prefix.

   I'm not sure if this version is an improvement over the original; the
   need to use "parser->lookahead_token" instead of "tok" influenced my
   decision to try this... but the jury is still out.  */
static void
closure (fsaparse_ctxt_t *parser)
{
restart_closure:
  atom (parser);
  for (;;)
    {
      switch (parser->lookahead_token)
        {
          case FSATOKEN_TK_QMARK:
          case FSATOKEN_TK_STAR:
          case FSATOKEN_TK_PLUS:
            addtok (parser, parser->lookahead_token);
            parser->lookahead_token = parser->lexer (parser->lex_context);
            continue;

          case FSATOKEN_TK_REPMN:
            /* REPMN needs extra work; move outside the switch statement.  */
            break;

          default:
            /* Merely let the initial atom call stand as our return result.  */
            return;
        }

      /* Deal with REPMN{min, max} cases in a separate block.  */
      {
        int i;
        size_t prev_sub_index, ntokens;
        int minrep, maxrep;

        /* Get the {min, max} pair decoded by the lexer.  */
        minrep = parser->lex_exchange (parser->lex_context,
                                       PROTO_LEXPARSE_OP_GET_REPMN_MIN,
                                       NULL);
        maxrep = parser->lex_exchange (parser->lex_context,
                                       PROTO_LEXPARSE_OP_GET_REPMN_MAX,
                                       NULL);

        /* Find out how many tokens in the preceding token list are
           covered by this REPMN directive.  This involves carefully
           working backwards through the linear, postfix token ordering.  */
        ntokens = nsubtoks (parser, parser->tindex);

        /* If min and max are both zero, merely remove preceding
           subexpression, get a new token, and restart the atom/closure
           processing from the top of the function.  Not sure if people will
           like this goto statement, but we'll give it a whirl.   */
        if (minrep == 0 && maxrep == 0)
          {
            parser->tindex -= ntokens;
            parser->lookahead_token = parser->lexer (parser->lex_context);
            goto restart_closure;
          }

        /* Non-zero min or max, defined as follows:
             {n}   The preceding item is matched exactly n times.
             {n,}  The preceding item is matched n or more times.
             {,m}  The preceding item is matched at most m times (GNU ext.)
             {n,m} The preceding item is matched at least n, but not more
                   than m times.
           For {n,} and {,m} cases, the omitted parameter is reported here
           as a negative value.  */
        prev_sub_index = parser->tindex - ntokens;
        if (maxrep < 0)
          addtok (parser, FSATOKEN_TK_PLUS);
        if (minrep == 0)
          addtok (parser, FSATOKEN_TK_QMARK);
        for (i = 1; i < minrep; ++i)
          {
            copytoks (parser, prev_sub_index, ntokens);
            addtok (parser, FSATOKEN_TK_CAT);
          }
        for (; i < maxrep; ++i)
          {
            copytoks (parser, prev_sub_index, ntokens);
            addtok (parser, FSATOKEN_TK_QMARK);
            addtok (parser, FSATOKEN_TK_CAT);
          }
        /* Prime the parser with the next token after REPMN and loop.  */
        parser->lookahead_token = parser->lexer (parser->lex_context);
      }
    }
}
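As a concrete illustration of the REPMN expansion above, here is a standalone model (not from the attached sources) that performs the same addtok/copytoks sequence on a character buffer, with '?', '+' and '.' standing in for QMARK, PLUS and CAT.  For example, "a{2,4}" expands to a a CAT a? CAT a? CAT:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Expand <atom>{min,max} the way closure() does, over a toy postfix
   buffer.  buf holds a postfix expression whose last ntokens characters
   are the atom being repeated; len is the current length.  A negative
   maxrep means the upper bound was omitted ("{n,}").  Returns the new
   length and keeps buf NUL-terminated.  */
static size_t
expand_repmn (char *buf, size_t len, size_t ntokens, int minrep, int maxrep)
{
  size_t prev = len - ntokens;   /* start of the atom to be repeated */
  int i;

  if (minrep == 0 && maxrep == 0)
    {
      buf[prev] = '\0';          /* "{0,0}": drop the atom entirely */
      return prev;
    }

  if (maxrep < 0)
    buf[len++] = '+';            /* "{n,}": final copy may repeat */
  if (minrep == 0)
    buf[len++] = '?';            /* first copy is optional */
  for (i = 1; i < minrep; i++)
    {                            /* mandatory copies: copytoks + CAT */
      memcpy (buf + len, buf + prev, ntokens);
      len += ntokens;
      buf[len++] = '.';
    }
  for (; i < maxrep; i++)
    {                            /* optional copies: copytoks + QMARK + CAT */
      memcpy (buf + len, buf + prev, ntokens);
      len += ntokens;
      buf[len++] = '?';
      buf[len++] = '.';
    }
  buf[len] = '\0';
  return len;
}
```

Note how "a{2,}" comes out as "a+a." (an extra mandatory copy concatenated after "a+"), matching the PLUS-then-copies order in the real code.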

static void
branch (fsaparse_ctxt_t *parser)
{
  fsatoken_token_t tok;

  closure (parser);
  tok = parser->lookahead_token;
  while (tok != FSATOKEN_TK_RPAREN && tok != FSATOKEN_TK_OR && tok >= 0)
    {
      closure (parser);
      tok = parser->lookahead_token;
      addtok (parser, FSATOKEN_TK_CAT);
    }
}

static void
regexp (fsaparse_ctxt_t *parser)
{
  branch (parser);
  while (parser->lookahead_token == FSATOKEN_TK_OR)
    {
      parser->lookahead_token = parser->lexer (parser->lex_context);
      branch (parser);
      addtok (parser, FSATOKEN_TK_OR);
    }
}
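Taken together, atom/closure/branch/regexp form a classic recursive-descent chain that emits tokens in postfix order.  The toy transcription below (standalone, not part of the patch; character opcodes replace real tokens, and the lexer is just a string cursor) shows the emission order, e.g. "ab|c" comes out as a b CAT c OR.  The real parser also emits FSATOKEN_TK_EMPTY for an empty atom; this sketch skips that case:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static const char *pat;      /* lexer stand-in: cursor into the pattern */
static char out[64];         /* token-list stand-in: postfix output */
static size_t nout;

static void regexp_level (void);

static void
atom_level (void)
{
  if (*pat == '(')
    {
      pat++;
      regexp_level ();
      assert (*pat == ')');  /* "unbalanced (" in the real parser */
      pat++;
    }
  else if (*pat && *pat != '|' && *pat != ')' && *pat != '*')
    out[nout++] = *pat++;    /* self-representing character token */
}

static void
closure_level (void)
{
  atom_level ();
  while (*pat == '*')        /* postfix unary operators follow the atom */
    {
      out[nout++] = '*';
      pat++;
    }
}

static void
branch_level (void)
{
  closure_level ();
  while (*pat && *pat != '|' && *pat != ')')
    {
      closure_level ();
      out[nout++] = '.';     /* concatenation is made explicit */
    }
}

static void
regexp_level (void)
{
  branch_level ();
  while (*pat == '|')
    {
      pat++;
      branch_level ();
      out[nout++] = '|';
    }
}

static const char *
to_postfix (const char *pattern)
{
  pat = pattern;
  nout = 0;
  regexp_level ();
  out[nout] = '\0';
  return out;
}
```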

/* Main entry point for the parser.  Parser is a pointer to a parser
   context struct created by fsaparse_new.  Before calling this function,
   the parser instance must be supplied with a lexer (fsaparse_lexer), and
   also with callback functions to receive warning and error reports
   (fsaparse_exception_fns).  */
void
fsaparse_parse (fsaparse_ctxt_t *parser)
{
  /* Obtain an initial token for lookahead, and keep tracking tree depth.  */
  parser->lookahead_token = parser->lexer (parser->lex_context);
  parser->current_depth = parser->depth;

  /* Run regexp to manage the next level of parsing.  */
  regexp (parser);
  if (parser->lookahead_token != FSATOKEN_TK_END)
    parser->abandon_with_error (_("unbalanced )"));

  /* If multiple expressions are parsed, second and subsequent patterns
     are presented as alternatives to preceding patterns.  */
  addtok (parser, FSATOKEN_TK_END - parser->nregexps);
  addtok (parser, FSATOKEN_TK_CAT);
  if (parser->nregexps)
    addtok (parser, FSATOKEN_TK_OR);

  ++parser->nregexps;
}

/* Receive functions to deal with exceptions detected by the parser:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
void
fsaparse_exception_fns (fsaparse_ctxt_t *parser,
                        fsaparse_warn_callback_fn *warningfn,
                        fsaparse_error_callback_fn *errorfn)
{
  /* Exception handling is done by explicit callbacks.  */
  parser->warn_client = warningfn;
  parser->abandon_with_error = errorfn;
}

/* Add "not provided!" stub function that gets called if the client
   fails to provide proper resources.  This is a hack, merely to get the
   module started; better treatment needs to be added later.  */
static void
no_function_provided (void *unused)
{
  assert (!"fsaparse: Plug-in function required, but not provided.");
}

/* Receive a lexer function, plus lexer instance context pointer, for use by
   the parser.  Although not needed initially, this plug-in architecture may
   be useful in the future, and it breaks up some of the intricate
   connections that made the original dfa.c code so daunting.  */
void
fsaparse_lexer (fsaparse_ctxt_t *parser,
                void *lexer_context,
                proto_lexparse_lex_fn_t *lex_fn,
                proto_lexparse_exchange_fn_t *lex_exchange_fn)
{
  bool is_multibyte;

  /* Record supplied lexer function and context for use later.  */
  parser->lex_context  = lexer_context;
  parser->lexer        = lex_fn;
  parser->lex_exchange = lex_exchange_fn;

  /* Query lexer to get multibyte nature of this locale.  */
  is_multibyte = lex_exchange_fn (lexer_context,
                                  PROTO_LEXPARSE_OP_GET_IS_MULTIBYTE_ENV,
                                  NULL);
  parser->multibyte_locale = is_multibyte;
  parser->unibyte_locale = ! is_multibyte;
}
/* Generate a new instance of an FSA parser.  */
fsaparse_ctxt_t *
fsaparse_new (void)
{
  fsaparse_ctxt_t *new_context;

  /* Acquire zeroed memory for new parser context.  */
  new_context = XZALLOC (fsaparse_ctxt_t);

  /* ?? Point warning, error and lexer functions to a "you need to tell me
     these first!" function? */
  new_context->warn_client        = (fsaparse_warn_callback_fn *)
                                    no_function_provided;
  new_context->abandon_with_error = (fsaparse_error_callback_fn *)
                                    no_function_provided;
  new_context->lexer              = (fsaparse_lexer_fn_t  *)
                                    no_function_provided;

  /* Default to unibyte locale... but we should synchronise with lexer. */
  new_context->multibyte_locale = false;
  new_context->unibyte_locale = true;

  return new_context;
}
/* After parsing, report a list of tokens describing the pattern.  Complex
   structures such as alternation, backreferences, and locale-induced
   complexity such as variable-length utf8 sequences are described here by
   appending operators that apply to the preceding item(s) (postfix
   notation).  */
void
fsaparse_get_token_list (fsaparse_ctxt_t *parser,
                         size_t *nr_tokens,
                         fsatoken_token_t **token_list)
{
  *nr_tokens  = parser->tindex;
  *token_list = parser->tokens;
}
/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-chdr;
 name="fsaparse.h"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fsaparse.h"

/* fsaparse -- Build a structure naming relationships (sequences,
   alternatives, options and precedence) of tokens

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* This module receives a stream of tokens from fsalex, and processes
   them to impose precedence rules and to describe complex pattern elements
   that are beyond the capability of the simple lexer.  In addition to the
   cases explicit in the syntax (e.g. "(ab|c)"), variable-length multibyte
   encodings (UTF-8; codesets including modifiers and/or shift items) also
   require these enhanced facilities.  */


#ifndef FSAPARSE_H
#define FSAPARSE_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include "fsatoken.h"
#include "proto-lexparse.h"

/* Multiple parser instances can exist in parallel, so define an opaque
   type to collect together all the context relating to each instance.  */
typedef struct fsaparse_ctxt_struct fsaparse_ctxt_t;

/* Allow configurable parser/lexer combinations by using a plugin interface
   for lexer invocation.  */
typedef fsatoken_token_t
fsaparse_lexer_fn_t (void *lexer_context);

/* Generate a new instance of an FSA parser.  */
extern fsaparse_ctxt_t *
fsaparse_new (void);

/* Receive a lexer function, plus lexer instance context pointer, for use by
   the parser.  Although not needed initially, this plug-in architecture may
   be useful in the future, and it breaks up some of the intricate
   connections that made the original dfa.c code so daunting.  */
extern void
fsaparse_lexer (fsaparse_ctxt_t *parser,
                void *lexer_context,
                proto_lexparse_lex_fn_t *lex_fn,
                proto_lexparse_exchange_fn_t *lex_exchange_fn);

/* Define function prototypes for warning and error callbacks.  */
typedef void
fsaparse_warn_callback_fn (const char *);
typedef void /* ?? _Noreturn? */
fsaparse_error_callback_fn (const char *);

/* Receive functions to deal with exceptions detected by the parser:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
extern void
fsaparse_exception_fns (fsaparse_ctxt_t *parser,
                        fsaparse_warn_callback_fn *warningfn,
                        fsaparse_error_callback_fn *errorfn);

/* Main entry point for the parser.  Parser is a pointer to a parser
   context struct created by fsaparse_new.  Before calling this function,
   the parser instance must be supplied with a lexer (fsaparse_lexer), and
   also with callback functions to receive warning and error reports
   (fsaparse_exception_fns).  */
extern void
fsaparse_parse (fsaparse_ctxt_t *parser);

/* After parsing, report a list of tokens describing the pattern.  Complex
   structures such as alternation, backreferences, and locale-induced
   complexity such as variable-length utf8 sequences are described here by
   appending operators that apply to the preceding item(s) (postfix
   notation).  */
extern void
fsaparse_get_token_list (fsaparse_ctxt_t *parser,
                         size_t *nr_tokens,
                         fsatoken_token_t **token_list);

#endif /* FSAPARSE_H */

/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-csrc;
 name="fsatoken.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fsatoken.c"

/* fsatoken - Support routines specific to token definitions

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* The majority of the fsatoken.[ch] module is in fsatoken.h, as it is
   shared by other modules.  This file provides token-specific support
   functions, such as functions to print tokens (for debugging).

   Although there is a relationship between some generic constructs
   such as character classes and the CSET token defined here, the generic
   items are defined in a separate support library, not in this module.
   This is because these tokens are very FSA/grep-specific, whereas the
   generic constructs are potentially widely usable, and may even be
   amenable to hardware-specific optimisations (such as superscalar
   and/or/set/clear/test-and-set/test-and-clear opcodes, and/or
   bit-counting operations).  */

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include "fsatoken.h"
#include <stdio.h>

#ifdef DEBUG

void
fsatoken_prtok (fsatoken_token_t t)
{
  char const *s;

  if (t < 0)
    fprintf (stderr, "FSATOKEN_TK_END");
  else if (t < FSATOKEN_NOTCHAR)
    {
      int ch = t;
      fprintf (stderr, "%c", ch);
    }
  else
    {
      switch (t)
        {
        case FSATOKEN_TK_EMPTY:
          s = "FSATOKEN_TK_EMPTY";
          break;
        case FSATOKEN_TK_BACKREF:
          s = "FSATOKEN_TK_BACKREF";
          break;
        case FSATOKEN_TK_BEGLINE:
          s = "FSATOKEN_TK_BEGLINE";
          break;
        case FSATOKEN_TK_ENDLINE:
          s = "FSATOKEN_TK_ENDLINE";
          break;
        case FSATOKEN_TK_BEGWORD:
          s = "FSATOKEN_TK_BEGWORD";
          break;
        case FSATOKEN_TK_ENDWORD:
          s = "FSATOKEN_TK_ENDWORD";
          break;
        case FSATOKEN_TK_LIMWORD:
          s = "FSATOKEN_TK_LIMWORD";
          break;
        case FSATOKEN_TK_NOTLIMWORD:
          s = "FSATOKEN_TK_NOTLIMWORD";
          break;
        case FSATOKEN_TK_QMARK:
          s = "FSATOKEN_TK_QMARK";
          break;
        case FSATOKEN_TK_STAR:
          s = "FSATOKEN_TK_STAR";
          break;
        case FSATOKEN_TK_PLUS:
          s = "FSATOKEN_TK_PLUS";
          break;
        case FSATOKEN_TK_CAT:
          s = "FSATOKEN_TK_CAT";
          break;
        case FSATOKEN_TK_OR:
          s = "FSATOKEN_TK_OR";
          break;
        case FSATOKEN_TK_LPAREN:
          s = "FSATOKEN_TK_LPAREN";
          break;
        case FSATOKEN_TK_RPAREN:
          s = "FSATOKEN_TK_RPAREN";
          break;
        case FSATOKEN_TK_ANYCHAR:
          s = "FSATOKEN_TK_ANYCHAR";
          break;
        case FSATOKEN_TK_MBCSET:
          s = "FSATOKEN_TK_MBCSET";
          break;
        default:
          s = "FSATOKEN_TK_CSET";
          break;
        }
      fprintf (stderr, "%s", s);
    }
}

#endif /* DEBUG */
/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-chdr;
 name="fsatoken.h"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fsatoken.h"

/* fsatoken - Create tokens for a compact, coherent regular expression language

   Copyright (C) 1988, 1998, 2000, 2002, 2004-2005, 2007-2014 Free Software
   Foundation, Inc.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA */

/* Written June, 1988 by Mike Haertel
   Modified July, 1988 by Arthur David Olson to assist BMG speedups  */

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* Regular expression patterns are presented as text, possibly ASCII; the
   format is very expressive, but this comes at the cost of being somewhat
   expensive to interpret (including identifying invalid patterns).  By
   tokenising the pattern, we make life much easier for the parser and
   other search machinery that follows.

   This file defines the tokens that we use, both for the benefit of the
   lexer/parser/dfa analyser that share this information, and for other
   machinery (such as the C compiler) that may need to store and/or
   manipulate these items.  */


#ifndef FSATOKEN_H
#define FSATOKEN_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

/* Obtain definition of ptrdiff_t from stddef.h  */
#include <stddef.h>

/* C stream octets, and non-stream EOF, are self-representing tokens.
   We need to include stdio.h to obtain the definition of EOF.  */
#include <stdio.h>

/* Number of bits in an unsigned char.  */
#ifndef CHARBITS
# define CHARBITS 8
#endif

/* First integer value that is greater than any character code.  */
#define FSATOKEN_NOTCHAR (1 << CHARBITS)
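
/* The arithmetic behind this boundary is worth spelling out: with
   CHARBITS of 8, FSATOKEN_NOTCHAR is 1 << 8 == 256, so every octet
   value 0..255 can serve as a self-representing character token, and
   all named tokens can be allocated from 256 upward.  A standalone
   sketch of that check (macros renamed DEMO_* here to avoid clashing
   with this header's own definitions): */

```c
#include <assert.h>

#define DEMO_CHARBITS 8
#define DEMO_NOTCHAR (1 << DEMO_CHARBITS)

/* A token in [0, DEMO_NOTCHAR) is a self-representing character;
   anything at DEMO_NOTCHAR or above is a named token, and anything
   negative is END (or below). */
static int
is_char_token (long tok)
{
  return 0 <= tok && tok < DEMO_NOTCHAR;
}
```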

/* The regexp is parsed into an array of tokens in postfix form.  Some tokens
   are operators and others are terminal symbols.  Most (but not all) of these
   codes are returned by the lexical analyzer.  */

typedef ptrdiff_t fsatoken_token_t;

/* Predefined token values.  */
enum
{
  FSATOKEN_TK_END = -1,                     /* FSATOKEN_TK_END is a terminal symbol that matches the
                                   end of input; any value of FSATOKEN_TK_END or less in
                                   the parse tree is such a symbol.  Accepting
                                   states of the DFA are those that would have
                                   a transition on FSATOKEN_TK_END.  */

  /* Ordinary character values are terminal symbols that match themselves.  */

  FSATOKEN_TK_EMPTY = FSATOKEN_NOTCHAR,              /* FSATOKEN_TK_EMPTY is a terminal symbol that matches
                                   the empty string.  */

  FSATOKEN_TK_BACKREF,                      /* FSATOKEN_TK_BACKREF is generated by \<digit>
                                   or by any other construct that
                                   is not completely handled.  If the scanner
                                   detects a transition on backref, it returns
                                   a kind of "semi-success" indicating that
                                   the match will have to be verified with
                                   a backtracking matcher.  */

  FSATOKEN_TK_BEGLINE,                      /* FSATOKEN_TK_BEGLINE is a terminal symbol that matches
                                   the empty string if it is at the beginning
                                   of a line.  */

  FSATOKEN_TK_ENDLINE,                      /* FSATOKEN_TK_ENDLINE is a terminal symbol that matches
                                   the empty string if it is at the end of
                                   a line.  */

  FSATOKEN_TK_BEGWORD,                      /* FSATOKEN_TK_BEGWORD is a terminal symbol that matches
                                   the empty string if it is at the beginning
                                   of a word.  */

  FSATOKEN_TK_ENDWORD,                      /* FSATOKEN_TK_ENDWORD is a terminal symbol that matches
                                   the empty string if it is at the end of
                                   a word.  */

  FSATOKEN_TK_LIMWORD,                      /* FSATOKEN_TK_LIMWORD is a terminal symbol that matches
                                   the empty string if it is at the beginning
                                   or the end of a word.  */

  FSATOKEN_TK_NOTLIMWORD,                   /* FSATOKEN_TK_NOTLIMWORD is a terminal symbol that
                                   matches the empty string if it is not at
                                   the beginning or end of a word.  */

  FSATOKEN_TK_QMARK,                        /* FSATOKEN_TK_QMARK is an operator of one argument that
                                   matches zero or one occurrences of its
                                   argument.  */

  FSATOKEN_TK_STAR,                         /* FSATOKEN_TK_STAR is an operator of one argument that
                                   matches the Kleene closure (zero or more
                                   occurrences) of its argument.  */

  FSATOKEN_TK_PLUS,                         /* FSATOKEN_TK_PLUS is an operator of one argument that
                                   matches the positive closure (one or more
                                   occurrences) of its argument.  */

  FSATOKEN_TK_REPMN,                        /* FSATOKEN_TK_REPMN is a lexical token corresponding
                                   to the {m,n} construct.  FSATOKEN_TK_REPMN never
                                   appears in the compiled token vector.  */

  FSATOKEN_TK_CAT,                          /* FSATOKEN_TK_CAT is an operator of two arguments that
                                   matches the concatenation of its
                                   arguments.  FSATOKEN_TK_CAT is never returned by the
                                   lexical analyzer.  */

  FSATOKEN_TK_OR,                           /* FSATOKEN_TK_OR is an operator of two arguments that
                                   matches either of its arguments.  */

  FSATOKEN_TK_LPAREN,                       /* FSATOKEN_TK_LPAREN never appears in the parse tree,
                                   it is only a lexeme.  */

  FSATOKEN_TK_RPAREN,                       /* FSATOKEN_TK_RPAREN never appears in the parse tree.  */

  FSATOKEN_TK_ANYCHAR,                      /* FSATOKEN_TK_ANYCHAR is a terminal symbol that matches
                                   a valid multibyte (or single byte) character.
                                   It is used only if MB_CUR_MAX > 1.  */

  FSATOKEN_TK_MBCSET,                       /* FSATOKEN_TK_MBCSET is similar to FSATOKEN_TK_CSET, but for
                                   multibyte characters.  */

  FSATOKEN_TK_WCHAR,                        /* Only returned by lex.  wctok contains
                                   the wide character representation.  */

  FSATOKEN_TK_CSET                          /* FSATOKEN_TK_CSET and (and any value greater) is a
                                   terminal symbol that matches any of a
                                   class of characters.  */
};


/* prtok - Display token name (for debugging) */
#ifdef DEBUG
extern void
fsatoken_prtok (fsatoken_token_t t);

#endif /* DEBUG */

#endif /* FSATOKEN_H */

/* vim:set shiftwidth=2: */

--------------060207090305090503050301
Content-Type: text/x-lua;
 name="strictness.lua"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="strictness.lua"

-- ================================================================
-- "strictness" tracks declaration and assignment of globals in Lua
-- Copyright (c) 2013 Roland Y., MIT License
-- v0.1.0 - compatible Lua 5.1, 5.2
-- ================================================================

local setmetatable = setmetatable
local getmetatable = getmetatable
local type = type
local rawget = rawget
local rawset = rawset
local unpack = unpack
local error = error
local getfenv = getfenv

-- ===================
-- Private helpers
-- ===================

-- Lua reserved keywords
local luaKeyword = {
  ['and'] = true,     ['break'] = true,   ['do'] = true,
  ['else'] = true,    ['elseif'] = true,  ['end'] = true ,
  ['false'] = true,   ['for'] = true,     ['function'] = true,
  ['if'] = true,      ['in'] = true,      ['local'] = true ,
  ['nil'] = true,     ['not'] = true ,    ['or'] = true,
  ['repeat'] = true,  ['return'] = true,  ['then'] = true ,
  ['true'] = true ,   ['until'] = true ,  ['while'] = true,
}

-- Register for declared globals, defined as a table
-- with weak values.
local declared_globals = setmetatable({},{__mode = 'v'})

-- The global env _G metatable
local _G_mt

-- A custom error function
local function err(msg, level)  return error(msg, level or 3) end

-- Custom assert with error level depth
local function assert(cond, msg, level)
  if not cond then
    return err(msg, level or 4)
  end
end

-- Custom argument type assertion helper
local function assert_type(var, expected_type, argn, level)
  local var_type = type(var)
  assert(var_type == expected_type,
    ('Bad argument #%d to global (%s expected, got %s)')
      :format(argn or 1, expected_type, var_type), level)
end

-- Checks in the register if the given global was declared
local function is_declared(varname)
  return declared_globals[varname]
end

-- Checks if the passed-in string can be a valid Lua identifier
local function is_valid_identifier(iden)
  return iden:match('^[%a_]+[%w_]*$') and not luaKeyword[iden]
end

-- ==========================
-- Module functions
-- ==========================

-- Allows the declaration of passed in varnames
local function declare_global(...)
  local vars = {...}
  assert(#vars > 0,
    'bad argument #1 to global (expected strings, got nil)')
  for i,varname in ipairs({...}) do
    assert_type(varname, 'string',i, 5)
    assert(is_valid_identifier(varname),
      ('bad argument #%d to global. "%s" is not a valid Lua identifier')
        :format(i, varname))
    declared_globals[varname] = true
  end
end

-- Allows the given function to write globals
local function declare_global_func(f)
  assert_type(f, 'function', nil, 5)
  return function(...)
    local old_index, old_newindex = _G_mt.__index, _G_mt.__newindex
    _G_mt.__index, _G_mt.__newindex = nil, nil
    local results = {f(...)}
    _G_mt.__index, _G_mt.__newindex = old_index, old_newindex
    return unpack(results)
  end
end

-- ==========================
-- Locking the global env _G
-- ==========================

do

  -- Catches the current env
  local ENV = _VERSION:match('5.2') and _G or getfenv()
  
  -- Preserves a possible existing metatable for the current env
  _G_mt = getmetatable(ENV)
  if not _G_mt then
    _G_mt = {}
    setmetatable(ENV,_G_mt)
  end

  -- Locks access to undeclared globals
  _G_mt.__index = function(env, varname)
    if not is_declared(varname) then
      err(('Attempt to read undeclared global variable "%s"')
        :format(varname))
    end
    return rawget(env, varname)
  end

  -- Locks assignment of undeclared globals
  _G_mt.__newindex = function(env, varname, value)
    if not is_declared(varname) then
      err(('Attempt to assign undeclared global variable "%s"')
        :format(varname))
    end
    rawset(env, varname, value)
  end
  
  rawset(ENV, 'global', declare_global)
  rawset(ENV, 'globalize', declare_global_func)

end
--------------060207090305090503050301
Content-Type: text/plain; charset=us-ascii;
 name="untangle"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="untangle"

#!/bin/env lua

--- Untangle -- Script to enable refactoring dfa.c (>4000 lines)

--[[
   Copyright (C) 2013-2014 Grouse Software.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3, or (at your option)
   any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc.,
   51 Franklin Street - Fifth Floor, Boston, MA  02110-1301, USA
--]]

-- Initial design and implementation by behoffski of Grouse Software,
-- 2013-2014.

--[[
   dfa.c is a particularly big and complex piece of software, and serves
   a massive set of use cases and machine environments.  However, to a
   limited extent, it has become prisoner of its own complexity:  While
   improvements in one area may be feasible, the intervening layers may
   re-code the pattern in a fashion that makes the improvement hard to
   implement.

   So, this script tries to untangle dfa.c a little:  To break the
   lexical analysis (token definition, char-class and multibyte-class
   storage management, pattern interpretation options) into separate
   module(s) (fsalex.[ch], fsatokens.h, fsautils.[ch], ... ? not sure).

   Initially, it is likely that the "untangled" version will have the
   same implementation as the current version, so that the integrity of
   the split sources can be checked.  Down the track, my hope is to
   change some information that is currently communicated by #ifdefs
   and/or separate variables into a larger, more explicit set of lexical
   tokens, so that the parser, and the following DFA compilation and
   analysis, are simplified by the improved token set, and the extra
   information captured in the tokens enables more sophisticated
   analysis of the pattern, leading to better search engine choices.

   At the time of writing (17 Feb 2014), my current guess is that tokens
   will be guaranteed to be at least 32 bits wide, the lowest bits (??
   probably 8 bits) will always contain the token type (opcode), and
   the higher bits can optionally contain information relating to the
   token (?? 24-bit index pointer for character sets/multibyte charsets;
   24-bit value for a multibyte character; an 8-bit value for an ASCII
   character with an extra bit (C locale)).  Existing (sometimes
   implied) opcodes will be expanded into families to explicitly name
   parameter interpretation, e.g. CHAR_EXACT, CHAR_CASE_INSENSITIVE,
   CHAR_CASE_DONT_CARE, MBCHAR_EXACT, MBCHAR_CASE_INSENSITIVE,
   MBCHAR_CASE_DONT_CARE, etc.  I'm not sure if CLASS and CLASS_INVERT
   opcodes are warranted, but it's possible:  The end-of-line character
   processing is perhaps best deferred to the parse stage in dfa.c.
   (?? Memo to self:  Perhaps later on, split out parser to
   fsaparse.[ch].)

   Incidentally, I use "fsa" rather than "dfa" in the names above, as
   the deterministic/nondeterministic implementation of the search
   engine isn't directly relevant during lexing and parsing.  There are
   indirect influences, such as choosing character-by-character
   representation of simple strings in the input pattern, and then
   working later to extract these strings (the "musts") since
   skip-based string searches are very efficient.  The chosen
   representation allows for a simple, uniform way of naming states
   that are alive during the search, and this fits well with the DFA's
   character-by-character search strategy.

   The script is written in Lua (http://www.lua.org), and was developed
   using Lua 5.1.5.  I've also run it successfully with Lua 5.2.3.

   The reasons for performing this refactoring via a Lua script
   are:
     (a) The refactoring will take a *long* time to get to a coherent
         level, and therefore keeping track of code changes in the
         original is critical.  The script lets me split up the code
         into snippets (segments) in an automated fashion, and verify
         that the original can be fully recreated from the sliced-up
          version.  Each snippet gets a unique name, and this is used
         when specifying modified versions; thus, the output has some
         protection from changes to the input;

     (b) Lua is a small, powerful scripting language influenced by
         Scheme, CLU, Modula-2 and others; its speed is impressive;
          writing readable code is fairly easy, and its reputation and
          sphere of influence are growing;

     (c) Lua has a compact, but very powerful and flexible, string
         manipulation library, which will be useful when wanting to
         perform tricky code edits in an automated fashion; and

     (d) Lua's licence (from 5.0 onwards; I'm using 5.1) is the MIT
          license, which is Free Software and GPL-compatible, so using
          it for a once-off refactoring project raises no licensing
          concerns.  (This script is licensed as GPL3, but that's an
          independent choice.)

--]]
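
-- The token layout guessed at above can be sketched as a pair of
-- pack/unpack helpers.  The 8/24-bit split and field meanings are
-- speculative (they mirror the "current guess" only), and Lua 5.1 has
-- no bitwise operators, so arithmetic stands in for shifts and masks.

```lua
local OPCODE_RANGE = 256        -- 2^8 distinct opcodes in the low bits

local function PackToken(Opcode, Param)
        -- Param (e.g. a 24-bit charset index) is optional
        return Opcode + (Param or 0) * OPCODE_RANGE
end

local function UnpackToken(Token)
        local Opcode = Token % OPCODE_RANGE
        return Opcode, (Token - Opcode) / OPCODE_RANGE
end
```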


local io     = require("io")
local os     = require("os")
local string = require("string")
local table  = require("table")

-- Avoid luarocks/luarocks.loader as I'm not sophisticated about handling
-- dual-lua-5.1/5.2 modules.  I use "strictness" to turn unintended
-- globals into errors, as these are almost always typos.  The strictness
-- script (version 0.1.0, obtained from the luarocks main repository) is
-- compatible with both Lua 5.1 and 5.2.

-- require("luarocks.loader")
require("strictness")

------------------------------------------------------------------------------

-- Pander to "make syntax-check"'s expectation that error messages be
-- marked for translation by having _(...) wrapping.  The C sources are
-- already marked correctly; the only problem is that syntax-check also
-- demands that this script conform.  I can't be bothered trying to
-- fine-tune the sources, so just dummy up a lua function to mimic in
-- a trivial fashion the layout that syntax-check demands.

local function _(...)
        return ...
end

------------------------------------------------------------------------------

-- We refer to line numbers as offsets from a section boundary, so that
-- if insertions/deletions occur, we mostly only have to redefine the
-- local line numbers plus the following section numbers, not every line
-- reference throughout the rest of the file.

local SectionInfo
do
        local InitialSource = {Filename = "-none-", PrevLine = -1, }
        SectionInfo = {Title = "(untitled)",
                       Source = InitialSource,
                       LineNr = 0}
end

local function Section(Title, Source, LineNr)
        if type(Source) ~= "table" then
               error(_(":Source must be a file-as-lines table object"))
        end

        -- Append a new section entry to the list of sections
        SectionInfo[#SectionInfo + 1] =
                {Title = Title, Source = Source, LineNr = LineNr}

        -- Perhaps we should check that sections do not overlap and/or
        -- skip lines, but most of our needs are met by the segment tagging
        -- facility anyway, so it's not a priority at the moment.
end
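
-- A tiny illustration of the section-relative numbering: when a section
-- moves, retargeting means changing one LineNr, not every tag offset.
-- (The numbers here are invented for demonstration.)

```lua
local DemoSec = {LineNr = 36}   -- section starts at file line 36
local function Absolute(Offset)
        return Offset + DemoSec.LineNr
end
assert(Absolute(0) == 36 and Absolute(5) == 41)
DemoSec.LineNr = 40             -- an edit pushed the section down 4 lines
assert(Absolute(5) == 45)       -- every tag in the section follows along
```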

------------------------------------------------------------------------------

-- NewFileTable reads a file, line-by-line, into a table, and returns
-- the table.  Array (sequence 1..n) references are file text entries;
-- Name/Value pairs hold other values (e.g. keys "Filename", "PrevLine")
local function NewFileTable(Filename)
        local f = assert(io.open(Filename))
        local Table = {Filename = Filename, PrevLine = 1}
        for l in f:lines() do
                Table[#Table + 1] = l
        end
        assert(f:close())
        return Table
end

------------------------------------------------------------------------------

-- Read dfa.c, dfa.h and Makefile.am, line-by-line, into tables
local dfac       = NewFileTable("dfa.c")
local dfah       = NewFileTable("dfa.h")
local makefileam = NewFileTable("Makefile.am")

------------------------------------------------------------------------------

local function printf(Format, ...)
        io.write(string.format(Format, ...))
end

------------------------------------------------------------------------------

-- Function to indent text with embedded newlines
local function Indent(Text, Level)
        local Spacing = string.rep(" ", Level)
        return Spacing .. Text:gsub("\n", "\n" .. Spacing)
end

------------------------------------------------------------------------------

-- Helper function to simplify extracting lines (paragraph(s)) as a
-- newline-terminated string.
local function Lines(Table, Start, End)
--        End = End or Start
        return table.concat(Table, "\n", Start, End) .. "\n"
end

------------------------------------------------------------------------------

local Segs = {SegList = {}, SegSeq = {}}

------------------------------------------------------------------------------

-- Function to create a tagged entry with a copy of lines from a source file
function Segs:Tag(Name, Start, End)
        if self.SegList[Name] ~= nil then
                error(_(":Duplicate segment tag: " .. Name))
        end
        End = End or Start

        -- Convert section-offset line numbers into absolute values
        local CurrentSection = SectionInfo[#SectionInfo]
        local Source = CurrentSection.Source
        Start = Start + CurrentSection.LineNr
        End   = End   + CurrentSection.LineNr

        local Entry =
                {Name    = Name,
                 Source  = Source,
                 Start   = Start,
                 End     = End,
                 RawText = Lines(Source, Start, End),
                 Section = CurrentSection,
        }

        -- Assume that tags are issued in ascending-line-number sequence,
        -- and so keep track of intervening text between tagged sections.
        -- (These should *only* be blank lines; anything else is an error.)
        if Source.PrevLine < Start then
                Entry.PrevText = Lines(Source, Source.PrevLine, Start - 1)
        end
        Source.PrevLine = End + 1

        -- Record new entry, both by name and by sequence
        self.SegList[Name] = Entry
        self.SegSeq[#self.SegSeq + 1] = Entry

        -- Get *really* stroppy about horizontal tabs in the source
        if Entry.RawText:match("\t") then
                print("Warning: Segment " ..
                         Entry.Name .." contains tabs.")
        end
end

------------------------------------------------------------------------------

--[[  -- DEBUG: Omitted for now, but it's very handy to have when merging
-- git master changes with local branch changes.

-- Print out all the segments extracted (or manufactured) so far
for _, Seg in ipairs(Segs.SegSeq) do
        printf("%s:\n%s\n", Seg.Name, Indent(Seg.RawText, 8))
end
--]]

-- [[ ]] -- Emacs pretty-printing gets confused if this line isn't present.

------------------------------------------------------------------------------

-- Check that no important text (anything other than newlines) has been
-- left untagged (it is captured by Tag's PrevText facility).
for _, Seg in ipairs(Segs.SegSeq) do
        local PrevText = Seg.PrevText
        if PrevText and not PrevText:match("^\n*$") then
                -- We've found some untagged text
                printf("ERROR: %s:%d:%q has preceding omitted text: \n%s\n",
                       Seg.Source.Filename, Seg.Start,
                       Seg.Name, Indent(PrevText, 8))
        end
end

------------------------------------------------------------------------------

--- ReconstructFile -- Reconstruct file from tagged fragments
-- This function is a sanity check that we can faithfully re-assemble the
-- original source file from the broken-down pieces.  If not, any further
-- processing has a higher probability of having defects.
local function ReconstructFile(Source)
        -- Form a simple filename for the reconstruction
        local ReconsFile = "reconstructed-" .. Source.Filename
        local f = assert(io.open(ReconsFile, "w"))

        -- Search for each snippet from the specified source, and write it
        -- (plus any preceding text) to the file.
        for _, Seg in ipairs(Segs.SegSeq) do
                if Seg.Source == Source then
                        if Seg.PrevText then
                                assert(f:write(Seg.PrevText))
                        end
                        assert(f:write(Seg.RawText))
                end
        end
        assert(f:close())
end

------------------------------------------------------------------------------

--- WriteFile -- Replace file with segments stored internally
-- This is a placeholder function to use when only minor edits to a
-- file are made, and the segment list can still be used in order.
local function WriteFile(Source)
        -- Form a simple filename for the reconstruction
        local f = assert(io.open(Source.Filename, "w"))

        -- Search for each snippet from the specified source, and write it
        -- (plus any preceding text) to the file.
        for _, Seg in ipairs(Segs.SegSeq) do
                if Seg.Source == Source then
                        if Seg.PrevText then
                                assert(f:write(Seg.PrevText))
                        end
                        assert(f:write(Seg.RawText))
                end
        end
        assert(f:close())
end

------------------------------------------------------------------------------

--- RawText -- Return raw text associated with named segment
-- @param Tag -- Tag given to segment upon creation.
-- Note that this function does not report any whitespace that may
-- precede the segment.
local function RawText(Tag)
        -- Look up the entry first; indexing a nil entry would raise an
        -- unhelpful error before we could report the missing tag.
        local Entry = Segs.SegList[Tag]
        if not Entry then
                error(_(":RawText: Nonexistent tag requested: " .. Tag))
        end
        return Entry.RawText
end

------------------------------------------------------------------------------

--- EditedText -- Obtain raw text body of tag, with applied edits
-- @param TagName -- Tag name of code snippet
-- @param ... -- Pattern/Subst pairs, e.g. "charclass c", "charclass *c"
-- @return Modified function declaration.
-- This function simplifies the code needed to make global
-- search-and-replace changes on source files.  Some care and precision is
-- needed in the source selection to avoid making unintended changes.
local function EditedText(TagName, ...)
        local h = RawText(TagName)

        -- If there are optional parameters, treat them as pattern/subst
        -- pairs, and apply them to the header.
        local args = {...}
        if #args % 2 ~= 0 then
                error(_(":EditedText: " ..
                        "Pattern supplied without a matching subst item"))
        end
        for i = 1, #args, 2 do
                h = h:gsub(args[i], args[i + 1])
        end
        return h
end
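
-- A worked use of the pattern/subst pairs, mirroring the docstring's
-- "charclass c" -> "charclass *c" example on a literal declaration
-- (the declaration text is invented, not a real tagged segment):

```lua
local h = "static void\nclrbit (unsigned b, charclass c)\n"
local args = {"charclass c", "charclass *c"}
for i = 1, #args, 2 do
        h = h:gsub(args[i], args[i + 1])
end
-- h is now "static void\nclrbit (unsigned b, charclass *c)\n"
```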

------------------------------------------------------------------------------

--- TextSubst -- Like gsub(), but with no regexp pattern matching
-- This function is intended for rewriting blocks of code, where the original
-- has lots of pattern-magic characters such as "-", "(", "%" etc.  We use
-- string.find() as it has a "plain text" search option.
local function TextSubst(Original, Search, Replacement)
        local Pos = 1
        local Text = Original
        local Found = false
        while true do
           -- Find the start and ending indices of Search in the file
           local Start, End = Text:find(Search, Pos, true);
--           print("Start, End, Pos, Search: ", Start, End, Pos, Search)
           if not Start then
              break
           end
           Found = true

           -- Splice the replacement in, in place of the search text
           Text = Text:sub(1, Start - 1) .. Replacement
                  .. Text:sub(End + 1)
           Pos = Start + #Replacement
        end

        assert(Found, "TextSubst expected at least 1 match: " .. Search)

        return Text
end
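
-- The splice loop above in miniature, on text full of pattern-magic
-- characters ("*", "(", "%") that a gsub pattern would misinterpret;
-- the C fragment is invented for illustration.

```lua
local Text = "REALLOC (p, n * 2); /* 50%-ish */"
local Start, End = Text:find("n * 2", 1, true)  -- plain find: "*" literal
Text = Text:sub(1, Start - 1) .. "n * 4" .. Text:sub(End + 1)
-- Text is now "REALLOC (p, n * 4); /* 50%-ish */"
```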

------------------------------------------------------------------------------

--- WriteExternDecl -- Write function declaration to given file.
-- @param File File (stream) object to receive edited declaration.
-- @param Declaration Function declaration (with no preceding lifetime
--        specifier, and also without a trailing semicolon).
-- Declarations for globally-visible (interface) functions need to be
-- emitted at least twice:  Once in the header file, and again in the
-- body of the implementing module.  This function edits the text of the
-- declaration to conform to the requirements of the header file,
-- namely, a preceding "extern" keyword, plus a trailing semicolon before
-- the final newline.  The edited text is then written to the given file,
-- along with the usual assert error checking.
local function WriteExternDecl(File, Declaration)
        local Comment, Decl
        Comment, Decl = Declaration:match("(/%*.-%*/\n+)(.*)")
        if not Decl then
                Comment = ""
                Decl = Declaration
        end

        -- Add "extern" before the text of the declaration
        Decl = Decl:gsub("^", "extern ")

        -- Insert a semicolon before the final newline of the declaration.
        Decl = Decl:gsub("\n%s*$", ";\n")

        -- Finally, write the result to the file, with error checking
        assert(File:write(Comment, Decl, "\n"))
end
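
-- The same two edits applied to a literal string, showing the
-- header-file form that WriteExternDecl emits (sample declaration
-- invented for illustration):

```lua
local Decl = "int\ndfa_demo_count (struct dfa *d)\n"
Decl = Decl:gsub("^", "extern ")        -- prepend the lifetime specifier
Decl = Decl:gsub("\n%s*$", ";\n")       -- semicolon before final newline
-- Decl is now "extern int\ndfa_demo_count (struct dfa *d);\n"
```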

------------------------------------------------------------------------------

-- Process dfa.h
Section("dfa.h header", dfah, 0)

Segs:Tag("Description.dfah",               1)
Segs:Tag("Copyright.dfah",                 2)
Segs:Tag("LicenseWarranty.dfah",           4,  17)
Segs:Tag("Authors.dfah",                  19,  19)

Section("dfa.h:includes", dfah, 21)
Segs:Tag("regex.h",                        0)
Segs:Tag("stddef.h",                       1)

Section("struct dfamust defn", dfah, 24)
Segs:Tag("dfamust-struct description",     0,   1)
Segs:Tag("dfamust-struct declaration",     2,   7)

Section("dfa opaque struct", dfah, 33)
Segs:Tag("struct dfa",                     0,   1)

Section("dfa.h:Entry Points", dfah, 36)
Segs:Tag("Entry Points header",            0)
Segs:Tag("dfaalloc description",           2,   4)
Segs:Tag("dfaalloc declaration",           5)
Segs:Tag("dfamusts description",           7)
Segs:Tag("dfamusts declaration",           8)
Segs:Tag("dfasyntax declaration",         10,  13)
Segs:Tag("dfacomp declaration",           15,  18)

Section("dfaexec entry point", dfah, 56)
Segs:Tag("dfaexec description",            0,  11)
Segs:Tag("dfaexec declaration",           12,  13)

Section("Remaining entry points", dfah, 71)
Segs:Tag("dfahint declaration",            0,   9)
Segs:Tag("dfafree declaration",           11,  12)

Section("Specialist entry points", dfah, 85)
Segs:Tag("Entry points for people who know", 0)
Segs:Tag("dfainit declaration",            2,   3)
Segs:Tag("dfaparse declaration",           5,   6)
Segs:Tag("dfaanalyze declaration",         8,  10)
Segs:Tag("dfastate declaration",          12,  14)

Section("dfah Error handling", dfah, 101)
Segs:Tag("Error handling introduction",    0)
Segs:Tag("dfawarn declaration",            2,   6)
Segs:Tag("dfaerror declaration",           8,  11)

Section("expose-using-utf8", dfah, 114)
Segs:Tag("using-utf8-extern",              0)

--[[ --resync--
Section("case-folded-counterparts", dfah, 105)
Segs:Tag("CASE_FOLDED_BUFSIZE defn",       0,   4)
Segs:Tag("case_folded_counterparts decl",  6)
--]]

------------------------------------------------------------------------------

-- Process dfa.c
Section("dfa.c header", dfac, 0)

Segs:Tag("Description.dfac",               1)
Segs:Tag("Copyright.dfac",                 2,   3)
Segs:Tag("LicenseWarranty.dfac",           5,  18)
Segs:Tag("Authors.dfac",                  20,  21)
Segs:Tag("ConfigEnv",                     23)
Segs:Tag("OwnDefs.dfac",                  25)
Segs:Tag("GlobalIncludes",                27,  35)

Section("various-macros-and-helper-fns", dfac, 37)
Segs:Tag("STREQ",                          0)
Segs:Tag("ISASCIIDIGIT",                   2,  10)
Segs:Tag("NaturalLangSupport",            12,  14)

Segs:Tag("dfa-wchar-wctype include",      16,  17)

Segs:Tag("xalloc",                        19) -- malloc+realloc+OOM check fns

-- *Initial part* of code to model character classes as bit vectors in ints
Section("Initial-charclass-setup-code", dfac, 58)
Segs:Tag("NoHPUXBitOps",                   0,   6)
Segs:Tag("CHARBITS-octets",                8,  11) -- Use 8 if not defined
Segs:Tag("NOTCHAR",                       13,  14) -- Should not be here...?
Segs:Tag("INTBITS",                       16,  19)
-- CHARCLASS_INTS: ints needed for the bit array to cover all NOTCHAR chars
Segs:Tag("CHARCLASS_INTS",                21,  22)
Segs:Tag("charclass_typedef",             24,  25)
Segs:Tag("to_uchar_typecheck",            27,  34)

Section("Contexts", dfac, 94)
-- Somewhat hairy context machinery: Initial char classification bits
Segs:Tag("Context_Bitmasks",               0,  15)
-- Context machinery: Constraints to match by (Criteria, PrevCh, CurrCh) specs
Segs:Tag("SUCCEEDS_IN_CONTEXT",           17,  39)
-- Context machinery: Macros to define what a constraint depends upon
Segs:Tag("PREV_*_DEPENDENT",              41,  49)
-- Context machinery: Finally, the constraint magic numbers themselves:
-- Bit 8-11: Valid contexts when next is CTX_NEWLINE;
-- Bit 4-7:  Valid contexts when next is CTX_LETTER; and
-- Bit 0-3:  Valid contexts when next is CTX_NONE.
-- Note that "CTX_NONE" probably should be called "CTX_OTHERS", and defined
-- after CTX_NEWLINE and CTX_LETTER, but before CTX_ALL
Segs:Tag("Context_MAGIC_NUMS",            51,  61)
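
-- A hedged model of the constraint layout documented above: three
-- 4-bit groups, selected by the class of the *next* character
-- (group 2 = newline bits 8-11, 1 = letter bits 4-7, 0 = others
-- bits 0-3).  The group numbering and example value are illustrative.

```lua
local function ContextGroup(Constraint, Group)
        -- extract the 4-bit field for the given next-character class
        return math.floor(Constraint / 16 ^ Group) % 16
end
-- e.g. 0x0F0 permits every previous context when the next char is a letter
assert(ContextGroup(0x0F0, 1) == 15)
assert(ContextGroup(0x0F0, 2) == 0)
```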

Section("Lexical Tokens", dfac, 157)
-- ?? does token need to be ptrdiff_t?  Should we separate comment from code?
Segs:Tag("RegexpTokenType",                0,   4)

-- List of all predefined tokens.  Initially, we just grab all the
-- definitions in one slab.  However, the comment states that some values
-- are not returned by the lexical analyzer, so later on, we may move these
-- into a separate block to limit their visibility/scope.
Segs:Tag("PredefinedTokens",               6,  94)

-- Recognizer (token position + constraint) position type (atom), plus
-- the position_set structure to dynamically manage arrays of position atoms.
Segs:Tag("PositionAtom+DynSet",           97, 113)

-- ?? Section defn?

-- The leaf_set type seems out-of-place here: The comment refers to
-- position_set, but the element type (size_t) does not, and the only use
-- of this type is in dfastate (near line 2500), after lex and parse.
Segs:Tag("leaf_set_typedef",             114, 120)

Section("DFA state", dfac, 279)

-- Define dfa_state type: While the lexer/parser are not DFA/NFA-specific,
-- this type features prominently in producing a dfa-usable description of
-- the search expression.  See especially field "states" of "struct dfa".
Segs:Tag("dfa_state_typedef",              0,  15)
Segs:Tag("state_num_typedef",             16,  19)

Section("classes, alloc, misc. bits", dfac, 300)

-- Struct to handle both char-based and wide-char-based character classes
-- "e.g., [a-c], [[:alpha:]], etc."
-- CHAR_BIT is at minimum 8, but can be more (e.g. 32 on some DSPs).
-- ?? CHAR_BIT <limits.h> versus CHARBITS?
Segs:Tag("mb_char_classes_struct",         0,  17)

-- Main dfa structure -- bit of a dog's breakfast with everything thrown in.
-- Initially, merely capture the whole structure in a single segment, then,
-- as time and understanding advances, break into more focussed smaller segs.
-- 10 Mar 2014: Break out bracket expressions (mbcsets, nmbcsets,
-- mbcsets_alloc).  5 Apr: Add mbrtowc_cache; split up latter lines some more.
Segs:Tag("dfa-struct intro-header",       19,  21)
Segs:Tag("dfa-struct scanner",            22,  25)
Segs:Tag("dfa-struct parser",             27,  38)
Segs:Tag("dfa-struct parser-multibyte",   40,  60)
Segs:Tag("dfa-struct mbrtowc_cache",      62,  66)

Section("dfa struct: bracket/state/parse", dfac, 368)
Segs:Tag("dfa-struct bracket-expressions-array", 0,   3)
Segs:Tag("dfa-struct superset",            5,   6)
Segs:Tag("dfa-struct state-builder",       8,   11)
Segs:Tag("dfa-struct parse->nfa",         13,  27)
Segs:Tag("dfa-struct dfaexec",            29,  51)
Segs:Tag("dfa-struct musts list",         52,  54)

Segs:Tag("dfa-struct mblen/nbmblen",      55,  64)
Segs:Tag("dfa-struct inputwcs/ninputwcs", 65,  72)
Segs:Tag("dfa-struct mbfollows",          73,  74)
Segs:Tag("dfa-struct mb_match_lens",      75,  76)
Segs:Tag("dfa-struct closing syntax",     77)

-- "Some macros for user access to dfa internals. "
Section("dfa helper macros and decls", dfac, 447)
Segs:Tag("dfa-access-macros-cmt",          0)
Segs:Tag("ACCEPTING",                      2,  3)
Segs:Tag("ACCEPTS_IN_CONTEXT",             5,  8)

-- ?? Not sure why forward declarations of dfamust and regexp are here
Segs:Tag("dfamust-forward-def",           10)
Segs:Tag("regexp-forward-def",            11)

-- Redefine xalloc.h XNMALLOC and XCALLOC to *not* typecast the allocated
-- memory, then use these in CALLOC/MALLOC/REALLOC macros.
-- ?? This code is awkward to use; e.g. see fsamusts.c.
Section("Memory allocation helper macros", dfac, 460)
Segs:Tag("typefree-XNMALLOC/XCALLOC",      0,  15)
Segs:Tag("CALLOC",                        17)
Segs:Tag("MALLOC",                        18)
Segs:Tag("REALLOC",                       19)

-- ?? Perhaps change "while (false)" to "while (0)" for consistency?
Segs:Tag("REALLOC_IF_NECESSARY",          21,  32)

-- Code to minimise expensive calls to mbrtowc by FETCH_WC.  This is
-- done by observing how the function reacts to single-octet inputs,
-- and splitting the results into "return octet" and "probe further"
-- sets (with WEOF the sentinel for "probe further", and octet values
-- meaning "return this value").
Section("dfambcache and mbs_to_wchar", dfac, 494)
Segs:Tag("dfambcache",                     0,  19)
Segs:Tag("mbs_to_wchar",                  21,  53)
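
-- A sketch of the caching idea described above: probe a conversion
-- function once per octet and store either the answer or a sentinel
-- meaning "probe further" (WEOF in the C code).  BuildOctetCache and
-- its Convert argument are stand-ins invented for this illustration.

```lua
local SENTINEL = -1             -- plays the role of WEOF
local function BuildOctetCache(Convert)
        local Cache = {}
        for Octet = 0, 255 do
                -- nil from Convert means "single octet is not enough"
                Cache[Octet] = Convert(Octet) or SENTINEL
        end
        return Cache
end
```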

-- Debug utility prtok, with preprocessor directives separated out
Section("prtok debug", dfac, 549)
Segs:Tag("prtok-ifdef-DEBUG-start",        0)
Segs:Tag("prtok-fn-decl",                  2,   3)
Segs:Tag("prtok-fn-body",                  4,  75)
Segs:Tag("prtok-ifdef-DEBUG-end",         76)


-- Character class facilities: single-bit bit test/set/clear
Section("Character classes-intro and bitops", dfac, 627)
Segs:Tag("charclass-section-introduction", 0)
Segs:Tag("chclass-tstbit-decl",            2,   3)
Segs:Tag("chclass-tstbit-body",            4,   6)
Segs:Tag("chclass-setbit-decl",            8,   9)
Segs:Tag("chclass-setbit-body",           10,  12)
Segs:Tag("chclass-clrbit-decl",           14,  15)
Segs:Tag("chclass-clrbit-body",           16,  18)
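
-- An arithmetic model of the bit-vector primitives tagged above (the
-- C versions use real shifts and masks; INTBITS = 32 is an assumption,
-- and these Lua names merely shadow the C ones for illustration):

```lua
local INTBITS = 32
local function setbit(b, c)
        local i, bit = math.floor(b / INTBITS) + 1, 2 ^ (b % INTBITS)
        local word = c[i] or 0
        -- only add the bit value if the bit isn't already set
        if math.floor(word / bit) % 2 == 0 then c[i] = word + bit end
end
local function tstbit(b, c)
        local word = c[math.floor(b / INTBITS) + 1] or 0
        return math.floor(word / 2 ^ (b % INTBITS)) % 2 == 1
end
```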

-- ... whole-set copy, clear, invert, compare
Section("Character classes-set operations", dfac, 647)
Segs:Tag("chclass-copyset-decl",           0,   1)
Segs:Tag("chclass-copyset-body",           2,   4)
Segs:Tag("chclass-zeroset-decl",           6,   7)
Segs:Tag("chclass-zeroset-body",           8,  10)
Segs:Tag("chclass-notset-decl",           12,  13)
Segs:Tag("chclass-notset-body",           14,  19)
Segs:Tag("chclass-equal-decl",            21,  22)
Segs:Tag("chclass-equal-body",            23,  25)

-- Return unique index representing specified set.  Refer to an existing
-- set if possible, otherwise add this set to the stored list.
Segs:Tag("charclass-index-plus-dfa",      27,  40)

-- Still relying on a static global, only moved a little from prev locn
Segs:Tag("Relocated dfa forward decl",    42,  43)

-- charclass_index rewritten to use dfa_charclass_index
Segs:Tag("redefinedd charclass_index",    45,  50)

-- Variables holding FSA options specified by dfasyntax()
Section("FSA syntax options supplied by dfasyntax()", dfac, 699)
Segs:Tag("misc: syntax_bits_set, syntax_bits, case_fold, eol_byte",
                                           0, 7)

-- Suddenly switch back into character context handling (we now have the
-- definition of eolbyte to use for CTX_NEWLINE)
Section("More character context handling stuff", dfac, 708)
Segs:Tag("context: sbit, letters, newline",   0,   7)
Segs:Tag("IS_WORD_CONSTITUENT",            9,  20)
Segs:Tag("char_context + wchar_ctxt",     22,  40)

-- User function that globally sets DFA options
Section("dfasyntax", dfac, 750)
Segs:Tag("dfasyntax",                      0,  24)

Section("wide-char-setbit fns", dfac, 776)
Segs:Tag("setbit::wchar_t comment",        0,   4)
Segs:Tag("MBS_SUPPORT::setbit_wc",         5,  14)
Segs:Tag("setbit_case_fold_c",            16,  26)

Section("UTF-8 encoding utils", dfac, 806)
Segs:Tag("using_utf8",                     0,  13)

-- Single-byte + ASCII optimisation test fn added (Mar 2014)
Section("Using-simple-locale", dfac, 821)
Segs:Tag("using-simple-locale",            0,  38)

-- Using the term "dross" here comes from the sources -- and it's a good
-- name to wrap up the assortment of static variables defined here.
Section("Lexical Analyzer Dross", dfac, 861)

-- Start by having all vars in a single clump; probably break them apart
-- later when we know more about what we're doing.
Segs:Tag("lex-many-static-vars",           0,  20)
Segs:Tag("FETCH_WC",                      23,  43)

Section("MIN and MAX", dfac, 906)
Segs:Tag("MIN",                            0,   2)
Segs:Tag("MAX",                            3,   5)

-- Handle hairy Unicode non-linear case folding cases
Section("Hairy Unicode case_folded_counterparts", dfac, 913)
Segs:Tag("unicode-lonesome-lower-table",   0,  15)
Segs:Tag("def-CASE_FOLDED_BUFSIZE",       17,  22)
Segs:Tag("case_folded_counterparts-decl", 24,  28)
Segs:Tag("case_folded_counterparts-body", 29,  45)

-- "alpha"->isalpha(), "punct"->ispunct(), "digit"->isdigit() etc. mapping
Section("find-Posix-named-predicate", dfac, 960)
Segs:Tag("predicate-typedef",              0)
Segs:Tag("name-pred-isbyte-atom",          2,  11)
Segs:Tag("prednames-list",                13,  27)
Segs:Tag("find-pred",                     29,  38)

-- NOTE: The comment preceding parse_bracket_exp is misleading: The
-- function handles *both* multibyte-char (produces a struct mb_char_class)
-- and single-byte-char classes (produces a charclass), not just multibyte.

-- For starters, merely copy massive functions verbatim
Section("charclass-parser-and-lexer", dfac, 1000)
Segs:Tag("parse_bracket_exp-decl",         0,   3)
Segs:Tag("parse_bracket_exp-body",         4, 269)
Segs:Tag("lex-decl",                     271, 272)
Segs:Tag("lex-body",                     273, 591)

Section("Recursive-Descent Parser", dfac, 1593)
Segs:Tag("recursive-descent parser intro", 0)
Segs:Tag("lookahead token",                2)
Segs:Tag("deferred-prod-stack-depth",      3,   7)
Segs:Tag("addtok_mb",                      9,  42)
Segs:Tag("addtok_wc fwd decl",            44)
Segs:Tag("addtok",                        46, 100)

Segs:Tag("addtok_wc",                     102, 132)

-- Body is void if MBS_SUPPORT isn't true; this is a simple transformation,
-- and so isn't broken out by our segment tagging at present.
Section("add_utf8_anychar", dfac, 1727)
Segs:Tag("add_utf8_anychar-decl",          0,   1)
Segs:Tag("add_utf8_anychar-body-start",    2)
Segs:Tag("add_utf8_anychar-classes-array", 3,  11)
Segs:Tag("add_utf8_anychar-class-tailor", 13,  27)
Segs:Tag("add_utf8_anychar-description",  29,  38)
Segs:Tag("add_utf8_anychar-add-tokens",   39,  46)
Segs:Tag("add_utf8_anychar-body-end",     47)

Section("Parser", dfac, 1776)
Segs:Tag("Grammar summary comment",        0,  33)
Segs:Tag("atom",                          35,  85)
Segs:Tag("nsubtoks",                      87, 106)
Segs:Tag("copytoks",                     108, 120)

Section("Parser, Part 2", dfac, 1898)
Segs:Tag("closure",                        0, 40)
Segs:Tag("branch",                        42, 51)
Segs:Tag("regexp",                        53, 63)
-- dfaparse: External user's main entry point for the parser
-- Suddenly, the dfa struct is seen more in parameter lists (as "d"),
-- although it's copied to the file-global-scope variable "dfa" here.
Segs:Tag("dfaparse-decl",                 65,  69)
Segs:Tag("dfaparse-body",                 70, 102)

-- ??  Out of FSA territory, and into DFA territory... yet?

Section("dfa: position_set operations", dfac, 2001)
Segs:Tag("position_set intro",             0)
-- Function names "copy", "insert", "merge" and "delete" are too generic
-- for my liking.  However, they give hints to the history of the code:
-- These were likely key parts of the early DFA implementation, and so
-- didn't need highly-qualified names -- later additions needed longer
-- names in order to avoid namespace clashes.  There's also the problem
-- that early compilers only used the first few chars of a name (6?) for
-- externally-visible identifiers, so very-terse names were needed to
-- survive in this environment.
Segs:Tag("copy",                           2,   9)
Segs:Tag("alloc_position_set",            11,  17)
Segs:Tag("insert",                        19,  49)
Segs:Tag("merge",                         51,  74)
Segs:Tag("delete",                        76,  88)
Segs:Tag("state_index",                   90, 149) -- find and/or create item
Segs:Tag("epsclosure",                   151, 214) -- epsilon closure

-- Some more functions to manipulate contexts

Section("Context-fns", dfac, 2217)
Segs:Tag("charclass_context",              0,  21)
Segs:Tag("state_separate_contexts",       23,  44)

Section("dfaanalyze", dfac, 2264)
Segs:Tag("dfaanalyse summary comment",     0,  51)
Segs:Tag("dfaanalyse",                    52, 265)

Section("dfastate", dfac, 2532)
Segs:Tag("dfastate summary comment",       0,  29)
Segs:Tag("dfastate",                      30, 287)
Segs:Tag("build_state",                  289, 362)
Segs:Tag("build_state_zero",             364, 375)

Section("Multibyte fns for dfaexec", dfac, 2909)
Segs:Tag("Multibyte fns section comment",  0)
Segs:Tag("SKIP_REMAINS_MB_IF_INITIAL_STATE", 2,  21)

-- ?? New section: state transition support/execution?
Segs:Tag("realloc_trans_if_necessary",    22,  45)
Segs:Tag("status_transit_state typedef",  47,  54)
Segs:Tag("transit_state_singlebyte",      56,  99)
Segs:Tag("match_anychar",                101, 131)
Segs:Tag("match_mb_charset",             133, 236)

Section("Multibyte fns for dfaexec - part 2", dfac, 3147)
Segs:Tag("check_matching_with_multibyte_ops",   0,  31)
Segs:Tag("transit_state_consume_1char",   33,  86)
Segs:Tag("transit_state",                 88, 164)
-- prepare_wc_buf is empty unless MBS_SUPPORT is true
Segs:Tag("prepare_wc_buf",               167, 196)

Section("dfaexec/init/opt/comp/free", dfac, 3345)
Segs:Tag("dfaexec",                        0, 154)
Segs:Tag("dfahint",                      156, 174)
Segs:Tag("free_mbdata",                  176, 214)
Segs:Tag("dfainit",                      216, 238)
Segs:Tag("dfaoptimize",                  240, 265)

Section("superset, comp, free", dfac, 3612)
Segs:Tag("dfasuperset",                    0,  71)
Segs:Tag("dfacomp",                       73,  86)
Segs:Tag("dfafree",                       88, 135)
-- dfaalloc (approx. line 4106) probably should be here...

-- Knowing must-have strings is highly valuable, as we can use very fast
-- search algorithms (e.g. Boyer-Moore) instead of the slow, grinding
-- character-by-character work of the DFA search engine.  The idea is that
-- the fast search will hopefully eliminate nearly all the lines in the
-- file (buffer) (e.g. possibly 99%), so we can have our cake and eat it:
-- A combination of a very fast search program, together with an expressive
-- search engine that can handle sophisticated constructs such as {n,m}
-- repeat constructs, multibyte characters (including collation classes) in
-- multiple locales, and backreferences.
Section("find-musthave-strings", dfac, 3749)

Segs:Tag("'musts' explanation",            0,  82)
Segs:Tag("icatalloc",                     84,  95)
Segs:Tag("icpyalloc",                     97, 101)
Segs:Tag("istrstr",                      103, 114)
Segs:Tag("freelist",                     116, 128)
Segs:Tag("enlist",                       130, 168)
Segs:Tag("comsubs",                      170, 213)
Segs:Tag("addlists",                     215, 229)
Segs:Tag("inboth",                       231, 264)
Segs:Tag("must typedef",                 266, 272)
Segs:Tag("resetmust",                    274, 279)
Segs:Tag("dfamust declaration",          281, 282)
Segs:Tag("dfamust definition",           283, 516)
-- dfaalloc should be near dfafree (approx. line 3550), as they are a pair
Segs:Tag("dfaalloc",                     518, 522)
Segs:Tag("dfamusts",                     524, 528)

Section("end-configure-vim-attributes", dfac, 4279)

Segs:Tag("vim: set shiftwidth=2",          0)
--]]

------------------------------------------------------------------------------

-- Process Makefile.am
Section("Automake file header", makefileam, 0)
Segs:Tag("automake-process-hint",          1)
Segs:Tag("Copyright.makefileam",           2)
Segs:Tag("Comment-block-spacer-1",         3)
Segs:Tag("LicenseWarranty.makefileam",     4,  15)

-- Define "automake-persistent" shadow versions of build-time macros
Section("am-persistent-macros", makefileam, 17)
Segs:Tag("am:LN",                          0)
Segs:Tag("am:AM_CFLAGS",                   2)
Segs:Tag("am:AM_LDFLAGS",                  4,   5)

Section("am-programs-generated", makefileam, 24)
Segs:Tag("am:bin_PROGRAMS",                0)
Segs:Tag("am:bin_SCRIPTS",                 1)

Section("sources-and-headers", makefileam, 26)
Segs:Tag("grep_SOURCES",                   0,   3)
Segs:Tag("noinst_HEADERS",                 4)

Section("additional link libs", makefileam, 32)
Segs:Tag("LIBINTL documentation",           0,   4)
Segs:Tag("LDADD defn",                      5,   7)
Segs:Tag("grep_LDADD",                      9)
Segs:Tag("am:localedir",                   10)
Segs:Tag("am:AM_CPPFLAGS",                 11)

-- Perhaps CPPFLAGS should be grouped with other Automake overrides (ln 17)?

Section("am:EXTRA_DIST", makefileam, 45)
Segs:Tag("am:dosbuf.c",                     0)

Section("am:egrep fgrep", makefileam, 47)
Segs:Tag("egrep-fgrep scripts",             0, 7)

Section("am: clean files", makefileam, 56)
Segs:Tag("am:CLEANFILES",                   0)


------------------------------------------------------------------------------

--[[

-- DEBUG: Omitted for now

-- Print out all the segments we've extracted (or manufactured) so far
for _, Seg in ipairs(Segs.SegSeq) do
        printf("%s:\n%s\n", Seg.Name, Indent(Seg.RawText, 8))
end

--]]

------------------------------------------------------------------------------

-- Check that no important text (anything other than newlines) has been
-- left untagged (it is captured by Tag's PrevText facility).
for _, Seg in ipairs(Segs.SegSeq) do
        local PrevText = Seg.PrevText
        if PrevText and not PrevText:match("^\n*$") then
                -- We've found some untagged text
                printf("ERROR: %s:%d:%q has preceding omitted text: \n%s\n",
                       Seg.Source.Filename, Seg.Start,
                       Seg.Name, Indent(PrevText, 8))
        end
end

------------------------------------------------------------------------------

-- Check integrity of our deconstruction of each file by reconstituting
-- it from the individual pieces.
ReconstructFile(dfac)
ReconstructFile(dfah)
ReconstructFile(makefileam)

------------------------------------------------------------------------------

-- Time for the rubber to hit the road: Create new files with existing content
-- re-ordered into (hopefully) more coherent groups/modules, and also modify
-- Makefile.am to know about the new sources.  At present, this script doesn't
-- re-run automake after editing Makefile.am; maybe later?


-- Edit function headers/declarations before working on individual files,
-- as we use these public versions both in the headers and in the
-- implementations.

local Decls = {}
-- Functions from charclass.[ch]

Decls["tstbit"]  = [[
bool _GL_ATTRIBUTE_PURE
charclass_tstbit (int b, charclass_t const *ccl)
]]
Decls["setbit"]  = [[
void
charclass_setbit (int b, charclass_t *ccl)
]]
Decls["clrbit"]  = [[
void
charclass_clrbit (int b, charclass_t *ccl)
]]
Decls["setbit_range"]  = [[
void
charclass_setbit_range (int start, int end, charclass_t *ccl)
]]
Decls["clrbit_range"]  = [[
void
charclass_clrbit_range (int start, int end, charclass_t *ccl)
]]
Decls["copyset"] = [[
void
charclass_copyset (charclass_t const *src, charclass_t *dst)
]]
Decls["zeroset"] = [[
void
charclass_zeroset (charclass_t *ccl)
]]
Decls["notset"] = [[
void
charclass_notset (charclass_t *ccl)
]]
Decls["equal"] = [[
int _GL_ATTRIBUTE_PURE
charclass_equal (charclass_t const *ccl1, charclass_t const *ccl2)
]]

--[[]] -- ?? Emacs syntax highlighting gets confused sometimes

Decls["unionset"] = [[
void
charclass_unionset (charclass_t const *src, charclass_t *dst)
]]

Decls["intersectset"] = [[
void
charclass_intersectset (charclass_t const *src, charclass_t *dst)
]]

-- Functions from fsatoken.[ch]

Decls["prtok"]   = EditedText("prtok-fn-decl",
                              "^static%s+", "",
                              "prtok %(token t%)",
                                  "fsatoken_prtok (fsatoken_token_t t)")

-- Functions from fsalex.[ch]

Decls["lex-new"] = [[
/* Generate a new instance of an FSA lexer.  */
fsalex_ctxt_t *
fsalex_new (void)
]]

Decls["parse_bracket_exp"] = EditedText("parse_bracket_exp-decl",
                             "static token", "static fsatoken_token_t",
                             "%(void%)", "(fsalex_ctxt_t *lexer)")

-- Redefine "lex" as "fsalex_lex", so we can run it in parallel with the
-- original code in dfa.c.  Also add a (currently-unused) context pointer,
-- (void*, sigh), so that we can have a fully-reentrant lexer sometime.
Decls["lex"] = EditedText("lex-decl",
                  "lex %(void%)", "fsalex_lex (fsalex_ctxt_t *lexer)",
                  "^static%s+token", "fsatoken_token_t")

Decls["lex-pattern"] = [[
/* Receive a pattern string and reset the lexer state.  */
void
fsalex_pattern (fsalex_ctxt_t *lexer,
                char const *pattern, size_t const pattern_len)
]]

Decls["lex-syntax"] = [[
/* Receive syntax directives, and other pattern interpretation
   instructions such as case folding and end-of-line character.  */
void
fsalex_syntax (fsalex_ctxt_t *lexer,
               reg_syntax_t bits, int fold, unsigned char eol)
]]

Decls["lex-exchange"] = [[
/* Define external function to do non-core data exchanges.
   This function must conform to proto_lexparse_exchange_fn_t.  */
int
fsalex_exchange (fsalex_ctxt_t *lexer,
                 proto_lexparse_opcode_t opcode,
                 void *param)
]]

Decls["lex-fetch-repmn-params"] = [[
/* The REPMN token has two parameters that are held internally.  When
   the client receives a FSALEX_TK_REPMN token, it should immediately
   call this function to retrieve the parameters relating to the token.
   In the future, I would prefer to see these values explicitly
   integrated into the opcode stream, instead of using this avenue.  */
void
fsalex_fetch_repmn_params (fsalex_ctxt_t *lexer,
                           int *p_minrep, int *p_maxrep)
]]

Decls["lex-fetch-wctok"] = [[
/* FSALEX_TK_WCHAR has an implied parameter, stored in wctok.  Provide
   an interface for the client to get this parameter when required.  */
wchar_t _GL_ATTRIBUTE_PURE
fsalex_fetch_wctok (fsalex_ctxt_t *lexer)
]]

Decls["lex-exception-fns"] = [[
/* Receive functions to deal with exceptions detected by the lexer:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
void
fsalex_exception_fns (fsalex_ctxt_t *lexer,
                      fsalex_warn_callback_fn *warningfn,
                      fsalex_error_callback_fn *errorfn)
]]

Decls["lex-fetch-dotclass"] = [[
/* Fetch_dotclass reports the charclass generated by fsalex to match "." in
   expressions, taking into account locale, eolbyte and Regex flags such as
   whether NUL should be a member of the class.  Others, especially code
   that re-casts UTF-8 coding as a series of character classes, may find
   this class relevant, so this interface lets them access it.  The
   original class is generated when fsalex_syntax() is called.  */
charclass_t const * _GL_ATTRIBUTE_PURE
fsalex_fetch_dotclass (fsalex_ctxt_t *lexer)
]]

-- Functions from fsaparse.[ch]

Decls["parse-new"] = [[
/* Generate a new instance of an FSA parser.  */
fsaparse_ctxt_t *
fsaparse_new (void)
]]

Decls["parse-lexer"] = [[
/* Receive a lexer function, plus lexer instance context pointer, for use by
   the parser.  Although not needed initially, this plug-in architecture may
   be useful in the future, and it breaks up some of the intricate
   connections that made the original dfa.c code so daunting.  */
void
fsaparse_lexer (fsaparse_ctxt_t *parser,
                void *lexer_context,
                proto_lexparse_lex_fn_t *lex_fn,
                proto_lexparse_exchange_fn_t *lex_exchange_fn)
]]

-- Rewrite header of fsaparse_parse completely here, as splitting the lexer
-- into a separate entity makes much of the header irrelevant.
Decls["parse"] = [[
/* Main entry point for the parser.  Parser is a pointer to a parser
   context struct created by fsaparse_new.  Before calling this function,
   the parser instance must be supplied with a lexer (fsaparse_lexer), and
   also with callback functions to receive warning and error reports
   (fsaparse_exception_fns).  */
void
fsaparse_parse (fsaparse_ctxt_t *parser)
]]

Decls["parse-get-token-list"] = [[
/* After parsing, report a list of tokens describing the pattern.  Complex
   structures such as alternation, backreferences, and locale-induced
   complexity such as variable-length utf8 sequences are described here by
   appending operators that apply to the preceding item(s) (postfix
   notation).  */
void
fsaparse_get_token_list (fsaparse_ctxt_t *parser,
                         size_t *nr_tokens,
                         fsatoken_token_t **token_list)
]]

-- Functions from fsamusts.[ch]

Decls["must"] = [[
/* Receive an existing list (possibly empty) of must-have strings, together
   with a list of the tokens for the current FSA (postfix tree order), and
   if there are any more must-have strings in the token list, add them to
   the must-have list.  Returns the possibly-modified list to the caller.
   Locale and syntax items are partially covered here by the case_fold and
   unibyte_locale flags, but this is incomplete, and should be addressed by
   Stage 2 (improving the expressiveness of tokens).  */
fsamusts_list_element_t *
fsamusts_must (fsamusts_list_element_t *must_list,
              size_t nr_tokens, fsatoken_token_t *token_list,
              bool case_fold, bool unibyte_locale)
]]

----------------******** charclass.h ********----------------

print("Creating charclass.h...")
local f = assert(io.open("charclass.h", "w"))
assert(f:write([[
/* charclass -- Tools to create and manipulate sets of characters (octets)

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* This module provides services to allocate, manipulate, consolidate and
   discard 256-bit vectors, used to describe 8-bit (octet) sets.  Octet
   is used as the member name here, as "byte" or "char" can sometimes
   refer to different bit sizes (e.g. char -> 6 bits on some IBM/Cyber
   architectures; char -> 32 bits on some DSP architectures; in C,
   sizeof (char) == 1 by definition on all architectures).

   The connection between these "charclass" sets and set expression by
   RE tools can be non-trivial:  Many Unicode characters cannot fit into
   8 bits, and even where octet-based code pages are used, nontrivial
   cases can appear (e.g. Code page 857, MS-DOS Turkish, which has both
   a dotted and a dotless lowercase and uppercase "I").

   On the storage side, things are slightly tricky and perhaps even murky
   at times.  The client starts by allocating a charclass, working on it,
   and then either finalising it (usually) or abandoning it.  The working
   class (pun intended) is represented by a pointer.  If not abandoned,
   this pointer is guaranteed to remain valid for the lifetime of the module.

   The module tries aggressively to eliminate duplicates; this is perhaps the
   main function of the finalise step.  So, the pointer that represents the
   class after finalise may not be the working pointer.

   In addition to the pointer method of referring to a class, the classes
   can be viewed as an array, with the first class receiving index 0, the
   second receiving index 1, and so on.  Functions are provided to map
   pointers to indexes, and vice versa.  The index representation is handy
   as it is very compact (typically much fewer than 24 bits), whereas
   pointers are architecture and OS-specific, and may be 64 bits or more.

   Index 0 is special; it will always represent the zero-class (no members
   set).  Users wanting to store a set of non-zeroclass classes (e.g. utf8)
   can use this property as a sentinel (a value of 0 for a static variable
   can mean "not initialised").

   Finally, there are some "gutter" bits, at least 3 on each end of the
   class, so that, to a limited extent (and especially for the common case
   of EOF == -1), bits can be set and cleared without causing problems,
   and the code does not need to include the overhead of checks for
   out-of-bound bit numbers.  These gutter bits are cleared when the class
   is finalised, so EOF (for instance) should never be a member of a class.  */

]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[

#ifndef CHARCLASS_H
#define CHARCLASS_H 1

/* Always import environment-specific configuration items first. */
#include <config.h>

#include <stdbool.h>
#include <stddef.h>

/* Define charclass as an opaque type.  */
typedef struct charclass_struct charclass_t;

/* Indices to valid charclasses are always positive, but -1 can be used
   as a sentinel in some places.  */
typedef ptrdiff_t charclass_index_t;

/* Entire-module initialisation and destruction functions.  The client
   specifies starting size for the class pool.  Destroy releases all
   resources acquired by this module.  */

extern void
charclass_initialise (size_t initial_pool_size);

extern void
charclass_destroy (void);

/* Single-bit operations (test, set, clear).  */

]]))

-- Write declaration with "extern" preceding and ";" following text.
WriteExternDecl(f, Decls["tstbit"])
WriteExternDecl(f, Decls["setbit"])
WriteExternDecl(f, Decls["clrbit"])

assert(f:write([[
/* Range-of-bits set and clear operations.  These are easier to read, and
   also more efficient, than multiple single-bit calls.  */

]]))

WriteExternDecl(f, Decls["setbit_range"])
WriteExternDecl(f, Decls["clrbit_range"])

assert(f:write([[
/* Whole-of-set operations (copy, zero, invert, compare-equal).  */

]]))

WriteExternDecl(f, Decls["copyset"])
WriteExternDecl(f, Decls["zeroset"])
WriteExternDecl(f, Decls["notset"])
WriteExternDecl(f, Decls["equal"])

assert(f:write([[
/* Add "unionset" and "intersectset" functions since whole-of-class
   operations tend to be reasonably expressive and self-documenting.
   In both cases the source modifies the destination: it is ORed in
   for unionset, and ANDed in for intersectset.  */
]]))
WriteExternDecl(f, Decls["unionset"])
WriteExternDecl(f, Decls["intersectset"])

assert(f:write([[
/* Functions to allocate, finalise and abandon charclasses.  Note that
   the module aggressively tries to reuse existing finalised classes
   rather than create new ones.  The module returns a unique index
   that can be used to reference the class; this index supersedes the
   pointer used during the work phase (if charclass_get_pointer is
   called, a different pointer may be returned).

   The aggressive-reuse policy also means that finalised classes must
   not undergo further modification.

   Allocating and then abandoning classes is useful where an operation
   requires temporary classes for a while, but these do not need to be
   maintained once the work is complete.  */

extern charclass_t *
charclass_alloc (void);

extern charclass_index_t
charclass_finalise (charclass_t *ccl);

extern void
charclass_abandon (charclass_t *ccl);

/* Functions to map between pointer references and index references for
   a charclass.  As explained above, the index is convenient as it is
   typically an array reference, and is usually not much larger than the
   number of classes that have been allocated.  */

extern charclass_t * _GL_ATTRIBUTE_PURE
charclass_get_pointer (charclass_index_t const index);

extern charclass_index_t _GL_ATTRIBUTE_PURE
charclass_get_index (charclass_t const *ccl);

/* Return a static string describing a class (Note: not reentrant).  */
extern char *
charclass_describe (charclass_t const *ccl);

]]))

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[
#endif /* CHARCLASS_H */

/* vim:set shiftwidth=2: */
]]))

assert(f:close())

----------------******** charclass.c ********----------------

print("Creating charclass.c...")
local f = assert(io.open("charclass.c", "w"))
assert(f:write([[
/* charclass -- Tools to create and manipulate sets of C "char"s

This module provides tools to create, modify, store and retrieve character
classes, and provides tools tuned to the needs of RE lexical analysers.

The class itself is an opaque type, referenced by a pointer while under
construction, and later by a unique index when finalised.  The module
tries aggressively to reuse existing finalised classes, rather than create
duplicates.  Functions are provided to map between indexes and pointers.
Because of the deduplication effort, the index reported for a class upon
finalisation may map to a different pointer than the one supplied by new ().

Classes may be shared between different lexer instances, although, at the
time of writing (10 April 2014) it is not thread-safe.  In many cases,
there might only be one class under construction at any time, with the
effort either finalised or abandoned quickly.  However, this module
recognises that sometimes multiple classes might be worked on in parallel,
and so explicitly marks each allocated class area as one of "unused",
"work" or "finalised".  This marking is done by an array of state bytes
dynamically allocated when the pool is created.

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h> /* for EOF assert test.  */
#include <string.h>
#include <wchar.h> /* for WEOF assert test.  */
#include "xalloc.h"

/* Lower bound for size of first pool in the list.  */
/* ?? Set to 2 for pool debug;  Use 10 in production?  */
#define POOL_MINIMUM_INITIAL_SIZE          10

#ifndef MAX
# define MAX(a,b) ((a) > (b) ? (a) : (b))
#endif

#ifndef MIN
# define MIN(a,b) ((a) < (b) ? (a) : (b))
#endif

]]))

assert(f:write([[
/* We maintain a list-of-pools here, choosing to malloc a new slab of
   memory each time we run out, instead of a realloc strategy.  This is so
   that we can provide a guarantee to the user that any class pointer issued
   remains valid for the lifetime of the module.  */

typedef ptrdiff_t pool_list_index_t;

/* Designator for each charclass in each pool.  Note that enums are ints by
   default, but we use a single unsigned char per class in our explicit
   memory allocation.  */
typedef enum
{
  STATE_UNUSED = 0,
  STATE_WORKING = 1,
  STATE_FINALISED = 2
} charclass_state_t;

typedef struct pool_info_struct {
  charclass_index_t first_index;
  size_t alloc;      /* ?? Use pool_list_index_t type for these?  */
  size_t used;
  charclass_t *classes;

  /* Charclass designator byte array, one per item, allocated dynamically.  */
  unsigned char *class_state;
} pool_t;

static pool_list_index_t pool_list_used  = 0;
static pool_list_index_t pool_list_alloc = 0;
static pool_t *pool_list = NULL;

/* While the header only guarantees a 3-bit gutter at each end of each
   class, we use an entire integer (typically 32 bits) for the gutter,
   with 1 integer placed at the start of each pool, 1 integer as a
   shared gutter between each class, and 1 integer after the last
   class.  This is why there is "(*int) + 1" code after class memory
   allocation calls.  */

]]))

-- Define CHARBITS, INTBITS, NOTCHAR and CHARCLASS_INTS used to scope out
-- size (in integers) of bit-vector array.
assert(f:write(RawText("NoHPUXBitOps"), "\n"))
assert(f:write(RawText("CHARBITS-octets"), "\n"))
assert(f:write(RawText("INTBITS"), "\n"))
assert(f:write(RawText("NOTCHAR"), "\n"))
assert(f:write(RawText("CHARCLASS_INTS"), "\n"))

assert(f:write([[
/* Flesh out the opaque charclass type given in the header.  */
/* The gutter integer following the class member storage also serves as the
   gutter integer before the next class in the list.

   Note that since the "gutter" notion explicitly includes negative values,
   members need to be signed ints, not unsigned ints, so that arithmetic
   shift right can be used (e.g. -8 >> 8 == -1, not -8 / 256 == 0).  */

struct charclass_struct {
   int members[CHARCLASS_INTS];
   int gutter_following;
};

]]))
-- Define public functions.  We've made charclass an opaque class, so
-- need to edit each body to change parameter "c" to "*c", and in body
-- change "c" to "c->members".

assert(f:write([[
/* Define class bit operations: test, set and clear a bit.

   Grrrr.  I wanted to exploit arithmetic right shift to come up with a
   really cheap and neat way of reducing small negative bit values,
   especially if b == EOF == -1, to an index of -1 that falls neatly
   into the gutter, but strict C conformance does not guarantee this.
   The code below handles the two most likely scenarios, but, as with
   anything that is undefined, this is playing with fire.  */

#if INT_MAX == 32767
# define INT_BITS_LOG2 4        /* log2(sizeof(int)) + log2(CHARBITS) */
#elif INT_MAX == 2147483647
# define INT_BITS_LOG2 5        /* log2(sizeof(int)) + log2(CHARBITS) */
#else
# error "Not implemented: Architectures with ints other than 16 or 32 bits"
#endif

#if ((~0 >> 1) < 0)
  /* Arithmetic shift right: Both signed and unsigned cases are ok.  */
# define ARITH_SHIFT_R_INT(b) ((b) >> INT_BITS_LOG2)
#else
  /* Avoid using right shift if b is negative.  The macro may evaluate b twice
     in some circumstances.  */
# define ARITH_SHIFT_R_INT(b) \
      (((b) < 0) ? -1 : ((b) >> INT_BITS_LOG2))
#endif

]]))
assert(f:write(Decls["tstbit"]))
local body = EditedText("chclass-tstbit-body",
               "c%[b / INTBITS%]", "ccl->members[ARITH_SHIFT_R_INT(b)]")
assert(f:write(body, "\n"))
assert(f:write(Decls["setbit"]))
local body = EditedText("chclass-setbit-body",
               "c%[b / INTBITS%]", "ccl->members[ARITH_SHIFT_R_INT(b)]")
assert(f:write(body, "\n"))
assert(f:write(Decls["clrbit"]))
local body = EditedText("chclass-clrbit-body",
               "c%[b / INTBITS%]", "ccl->members[ARITH_SHIFT_R_INT(b)]")
assert(f:write(body, "\n"))

-- Add "setbit_range" and "clrbit_range", mainly as it allows the client
-- to write significantly cleaner code in some cases (utf8), but also because
-- this code is (initially) modestly more efficient.  (I could implement
-- bitmasks here to improve efficiency much more, but there are so many
-- things to do that I'm skipping it for now.)
assert(f:write(Decls["setbit_range"], "\n"))
assert(f:write([[
{
  int bit;

  /* Do nothing if the range doesn't make sense.  */
  if (end < start)
    return;
  if (start >= NOTCHAR)
    return;

  /* Clip the range to be in the interval [-1..NOTCHAR - 1] */
  start = MAX(start, -1);
  end   = MAX(end,   -1);
  /* We know start is < NOTCHAR from the test above.  */
  end   = MIN(end,   NOTCHAR - 1);

  /* ?? Could check that ccl is a valid class, but not at present.  */

  /* Okay, loop through the range, bit-by-bit, setting members.  */
  for (bit = start; bit <= end; bit++)
    ccl->members[ARITH_SHIFT_R_INT(bit)] |= 1U << bit % INTBITS;
}

]]))

assert(f:write(Decls["clrbit_range"], "\n"))
assert(f:write([[
{
  int bit;

  /* Do nothing if the range doesn't make sense.  */
  if (end < start)
    return;
  if (start >= NOTCHAR)
    return;

  /* Clip the range to be in the interval [-1..NOTCHAR - 1] */
  start = MAX(start, -1);
  end   = MAX(end,   -1);
  /* We know start is < NOTCHAR from the test above.  */
  end   = MIN(end,   NOTCHAR - 1);

  /* ?? Could check that ccl is a valid class, but not at present.  */

  /* Okay, loop through the range, bit-by-bit, clearing members.  */
  for (bit = start; bit <= end; bit++)
    ccl->members[ARITH_SHIFT_R_INT(bit)] &= ~(1U << bit % INTBITS);
}

]]))

assert(f:write([[
/* Define whole-set operations: Copy, clear, invert, compare and union  */

]]))

assert(f:write(Decls["copyset"]))
local body = EditedText("chclass-copyset-body",
                        "\n  memcpy .-\n",
        "\n  memcpy (dst->members, src->members, sizeof(src->members));\n")
assert(f:write(body, "\n"))
assert(f:write(Decls["zeroset"]))
local body = EditedText("chclass-zeroset-body",
                  "\n  memset .-\n",
                  "\n  memset (ccl->members, 0, sizeof(ccl->members));\n")
assert(f:write(body, "\n"))
assert(f:write(Decls["notset"]))
local body = EditedText("chclass-notset-body",
                        "    s%[i%] = ~s%[i%]",
                        "    ccl->members[i] = ~ccl->members[i]")
assert(f:write(body, "\n"))
assert(f:write(Decls["equal"]))
local body = EditedText("chclass-equal-body",
                        "\n  return .-\n",
                        "\n  return memcmp (ccl1->members, ccl2->members,\n"
          .. "       sizeof(ccl1->members)) == 0;\n")
assert(f:write(body, "\n"))

-- Add "unionset" and "intersectset" functions so we can use classes more
-- expressively and directly.
assert(f:write(Decls["unionset"]))
assert(f:write([[
{
  int i;

  for (i = 0; i < CHARCLASS_INTS; ++i)
    dst->members[i] |= src->members[i];
}

]]))
assert(f:write(Decls["intersectset"]))
assert(f:write([[
{
  int i;

  for (i = 0; i < CHARCLASS_INTS; ++i)
    dst->members[i] &= src->members[i];
}

]]))

-- Rewrite charclass storage handling.  The original code relating to this
-- starts at about line 595 in dfa.c, but this code is sufficiently
-- different that I'm writing it from scratch.
assert(f:write([[
/* #ifdef DEBUG */

/* Nybble (4bit)-to-char conversion array for little-bit-endian nybbles.  */
static const char *disp_nybble = "084c2a6e195d3b7f";

/* Return a static string describing a class (Note: not reentrant).  */
char *
charclass_describe (charclass_t const *ccl)
{
  /* The string should probably be less than 90 chars, but overcompensate
     for limited uncertainty introduced by formatting the pointer value.  */
  static char buf[256];
  char *p_buf = buf;
  int i;

  p_buf += sprintf (p_buf, "0x%08lx:", (unsigned long) ccl);
  for (i = 0; i < CHARCLASS_INTS; i += 2)
    {
      int j = ccl->members[i];
      *p_buf++ = ' ';
      *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
      *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
      *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 28) & 0x0f];

      j = ccl->members[i + 1];
      *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
      *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
      *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
      *p_buf++ = disp_nybble[(j >> 28) & 0x0f];
    }
  *p_buf++ = '\0';
  return buf;
}

/* static */ void
debug_pools (const char *label, bool class_contents)
{
  pool_list_index_t pool_nr;
  size_t class_nr;

  printf ("\nPool %p debug(%s): [alloc, used: %ld %ld]\n",
          pool_list, label, pool_list_alloc, pool_list_used);
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      pool_t *pool = &pool_list[pool_nr];
      printf (
" %3ld: first_index, alloc, used, classes: %4ld %3lu %3lu %p\n",
pool_nr, pool->first_index, pool->alloc, pool->used, pool->classes);
      printf ("     class_states: ");
      for (class_nr = 0; class_nr < pool->alloc; class_nr++)
        switch (pool->class_state[class_nr]) {
          case STATE_UNUSED:    putchar ('.'); break;
          case STATE_WORKING:   putchar ('w'); break;
          case STATE_FINALISED: putchar ('F'); break;
          default: printf ("?%02x", pool->class_state[class_nr]);
        }
      putchar ('\n');
    }

  /* If class contents requested, print them out as well.  */
  if (class_contents)
    for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
      {
        pool_t *pool = &pool_list[pool_nr];
        for (class_nr = 0; class_nr < pool->used; class_nr++)
          printf ("%s\n",
                  charclass_describe (&pool->classes[class_nr]));
      }
}

/* #endif * DEBUG */

static pool_t *
add_new_pool (void)
{
  pool_t *prev, *pool;
  size_t pool_class_alloc;
  charclass_t *alloc_mem;

  /* If the pools list is full, use x2nrealloc to expand its size.  */
  if (pool_list_used == pool_list_alloc)
      pool_list = x2nrealloc (pool_list, &pool_list_alloc, sizeof (pool_t));

  /* Find the size of the last charclass pool in the (old) list.  Scale up
     the size so that malloc activity will decrease as the number of pools
     increases.  Also, add 1 here as we knock off 1 to use as a gutter
     later.  */
  prev = &pool_list[pool_list_used - 1];
  pool_class_alloc = (prev->alloc * 5 / 2) + 1;
  alloc_mem = XNMALLOC (pool_class_alloc, charclass_t);

  /* Set up the new pool, shifting the alloc pointer to create the gutter
     preceding the first class of the pool.  */
  pool = &pool_list[pool_list_used++];
  pool->classes = alloc_mem + 1;
  pool->first_index = prev->first_index + prev->alloc;
  pool->alloc = pool_class_alloc - 1;
  pool->used = 0;
  pool->class_state = xzalloc (pool->alloc);

  return pool;
}

charclass_t *
charclass_alloc (void)
{
  pool_list_index_t pool_nr;
  charclass_t *class;
  pool_t *pool = NULL;
  size_t class_nr;
  size_t class_last_nr;
  int *gutter_preceding;

  /* Locate a pool with unused entries (if any).  */
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      pool = &pool_list[pool_nr];

      /* Try to use the earliest pool possible, first by filling in a hole
         left by a withdrawn class, or by grabbing an unused class from the
         end of the list.  */
      class_last_nr = MIN(pool->used + 1, pool->alloc);
      for (class_nr = 0; class_nr < class_last_nr; class_nr++)
        {
          if (pool->class_state[class_nr] == STATE_UNUSED)
            goto found_pool_and_class;
        }
    }

  /* No space found, so prepare a new pool and make this class its first
     element.  */
  pool = add_new_pool ();
  class_nr = 0;
  /* FALLTHROUGH */

found_pool_and_class:
  /* Mark the found class state as working, zero its elements, and return
     the class pointer to the caller.  Zeroing is needed as this class may
     have been previously worked on, but then abandoned or withdrawn.  */
  pool->class_state[class_nr] = STATE_WORKING;
  if (class_nr >= pool->used)
    pool->used = class_nr + 1;
  class = &pool->classes[class_nr];

  /* Zero out the class' members, and also the gutters on each side.  */
  memset (class, 0, sizeof (*class));
  gutter_preceding = ((int *) class) - 1;
  *gutter_preceding = 0;

  return class;
}

pool_t * _GL_ATTRIBUTE_PURE
find_class_pool (charclass_t const *ccl)
{
  pool_list_index_t pool_nr;
  pool_t *pool = NULL;
  ptrdiff_t class_ptr_offset;

  /* Locate the pool whose memory address space covers this class.  */
  /* ?? Perhaps check &pool->classes[pool->alloc] in this first loop, and
     then check that the index is in the "used" portion later, so we can
     diagnose malformed pointers more exactly.  */
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      pool = &pool_list[pool_nr];
      if ((pool->classes <= ccl) && (ccl < &pool->classes[pool->alloc]))
        goto found_pool;
    }

  /* No credible pool candidate was found.  */
  assert (!"find_class_pool: no pool found");
  return NULL;

found_pool:
  /* Make sure the class clearly lies on an array boundary within the pool's
     memory allocation.  */
  class_ptr_offset = (char *) ccl - (char *) pool->classes;
  if ((class_ptr_offset % sizeof (charclass_t)) != 0)
    {
      /* Pointer does not lie at the start of a pool member.  */
      assert (!"find_class_pool: pointer not aligned.");
      return NULL;
    }

  return pool;
}

static void
withdraw_class (charclass_t *ccl, pool_t *class_pool)
{
  pool_t *pool;
  size_t class_nr;
  int *gutter_preceding;

  /* Use pool reference if given, otherwise work back from the class pointer
     to find the associated pool.  */
  pool = (class_pool != NULL) ? class_pool : find_class_pool (ccl);

  if (pool == NULL)
    assert (!"Could not locate a pool for this charclass");

  /* Zero out the gutters each side of the class.  */
  ccl->gutter_following = 0;
  gutter_preceding = ((int *) ccl) - 1;
  *gutter_preceding = 0;

  /* Work out the class index in the pool.  */
  class_nr = ccl - pool->classes;
  pool->class_state[class_nr] = STATE_UNUSED;

  /* Is this the last item within the pool's class list? */
  if (class_nr == pool->used - 1)
    {
      /* Yes, reduce the pool member count by 1.  */
      pool->used--;
      return;
    }
}

/* Finish off creating a class, and report an index that can be used
   to reference the class.  */
charclass_index_t
charclass_finalise (charclass_t *ccl)
{
  int *gutter_preceding;
  pool_list_index_t pool_nr;
  pool_t *pool;
  charclass_t *found = NULL;
  size_t class_nr;
  pool_t *my_pool = NULL;
  size_t my_class_nr = 0;

  /* Search all pools for a finalised class matching this class, and, if
     found, use it in preference to the new one.  While searching, also
     record where the work class is located.  If the work class cannot be
     found, the pointer is invalid, and we raise an assertion failure.  */
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      pool = &pool_list[pool_nr];
      for (class_nr = 0; class_nr < pool->used; class_nr++)
        {
          charclass_t *search = &pool->classes[class_nr];
          /* Have we found ourselves in the list? */
          if (search == ccl)
            {
              /* Yes, remember this place in case no duplicate is found.  */
              my_pool = pool;
              my_class_nr = class_nr;
            }
          if (pool->class_state[class_nr] != STATE_FINALISED)
            continue;
          if (charclass_equal (search, ccl))
            {
              /* Another class, finalised, matches:  Use it in preference to
                 potentially creating a duplicate.  */
              withdraw_class (ccl, my_pool);
              found = search;
              goto found_matching_class;
            }
        }
    }

  /* No duplicate found... but make sure the search pointer is known. */
  assert (my_pool != NULL);
  assert (my_pool->class_state[my_class_nr] == STATE_WORKING);

  /* Prepare to convert the search (work) class into a finalised class.  */
  pool = my_pool;
  class_nr = my_class_nr;
  found = &pool->classes[class_nr];
  /* FALLTHROUGH */

found_matching_class:
  /* Clear out the gutter integers each side of the class entry.  */
  gutter_preceding = found->members - 1;
  *gutter_preceding = 0;
  found->gutter_following = 0;
  pool->class_state[class_nr] = STATE_FINALISED;

  /* Return the index of the class.  */
  return pool->first_index + class_nr;
}

void
charclass_abandon (charclass_t *ccl)
{
  withdraw_class (ccl, NULL);
}

/* Additional functions to help clients work with classes.  */

charclass_t * _GL_ATTRIBUTE_PURE
charclass_get_pointer (charclass_index_t const index)
{
  pool_list_index_t pool_nr;
  pool_t *pool;

  /* Locate the pool whose index range covers the given index.  */
  for (pool_nr = 0; pool_nr < pool_list_used; pool_nr++)
    {
      /* Is the index inside this pool? */
      pool = &pool_list[pool_nr];
      if (pool->first_index <= index
              && index < (pool->first_index + pool->used))
        {
          /* Yes, find the pointer within the pool and return it.  */
          return &pool->classes[index - pool->first_index];
        }
    }

  /* The mapping above should never fail; we could return NULL, but we
     choose to abort instead.  */
  assert (!"index-to-charclass mapping failed");
  return NULL;
}

charclass_index_t _GL_ATTRIBUTE_PURE
charclass_get_index (charclass_t const *ccl)
{
  pool_t *pool;

  /* This code is similar to charclass_finalise... perhaps merge? */
  pool = find_class_pool (ccl);
  if (pool == NULL)
    return -1;

  /* Report the index to the caller.  */
  return pool->first_index + (ccl - pool->classes);
}

/* Functions to initialise module on startup, and to shut down and
   release acquired resources at exit.  */

void
charclass_initialise (size_t initial_pool_size)
{
  size_t initial_alloc;
  charclass_t *alloc_mem;
  pool_t *pool;
  charclass_t *ccl;
  charclass_index_t zeroclass_index;

  /* Usually EOF = WEOF = -1, but the standard merely states that they must
     be a negative integer.  We test for -1 here as it's a prime target for
     a "permitted" gutter value, and different values might be a problem.  */
  assert (EOF == -1);
  assert (WEOF == -1);

  /* First, set up the list-of-pools structure with initial storage.  */
  pool_list_alloc = 4;
  pool_list = (pool_t *) xnmalloc (pool_list_alloc, sizeof (pool_t));

  /* If initial pool size is small, inflate it here as we prefer to waste
     a little memory, rather than issue many calls to xmalloc ().  This
     minimum also ensures that our double-up pool size strategy has a sane
     starting point.  */
  initial_alloc = MAX(initial_pool_size, POOL_MINIMUM_INITIAL_SIZE);

  /* Set up the first pool using our chosen first alloc size.  Allocate an
     extra class, and offset the pool by this amount, in order to accommodate
     the initial gutter integer.  (Note for the future:  If charclass
     alignment becomes significant, then sizeof (charclass) and this offset
     may need to be changed, perhaps for SIMD instructions.)  */
  pool_list_used = 1;
  pool = &pool_list[0];
  pool->first_index = 0;
  pool->alloc = initial_alloc;
  pool->used = 0;
  alloc_mem = XNMALLOC (pool->alloc + 1, charclass_t);
  pool->classes = alloc_mem + 1;
  pool->class_state = xzalloc (pool->alloc);

  /* Enforce the all-zeroes class to be the first class.  This is needed as
     "abandon" may leave a hole in a pool in some cases, and in these cases
     we need to ensure that no-one else picks it up by accident (as this
     would invalidate the guarantee that the module eliminates all
     duplicates, from the point of view of the user).  So, we set the first
     class to all-zeroes, and also zero out abandoned classes where a hole
     is unavoidable.  */
  ccl = charclass_alloc (); /* Alloc delivers an all-zeroes class.  */
  zeroclass_index = charclass_finalise (ccl);
  assert (zeroclass_index == 0);

/* debug_pools ("add_new_pool: zeroclass added"); */

}

void
charclass_destroy (void)
{
  int i;
  int *alloc_mem;

  /* First, discard the charclass memory associated with each pool,
     including catering for the offset used upon creation.  */
  for (i = 0; i < pool_list_used; i++)
    {
      alloc_mem = (int *) pool_list[i].classes;
      free (alloc_mem - 1);
    }

  /* Second, free up the pool list itself.  */
  free (pool_list);
}

]]))

-- Finally, add trailer line (vim)
assert(f:write([[
/* vim:set shiftwidth=2: */
]]))

assert(f:close())

------------------------------------------------------------------------------

--- FSATokenSubst -- Substitute bare token names with module-prefixed names
-- Knowledge of the original names, including all-caps versus all-lowercase
-- variants, is collected here, as multiple parties want to convert
-- tokens in their text.
-- @param Original -- Source code to be modified
-- @return Code with keywords/names qualified with a prefix as appropriate

local function FSATokenSubst(Original)
   local Modified = Original

--[[
   -- Do not touch "token" as it is used in many comments etc; this forces
   -- callers to do the replacement on a case-by-case basis, sigh.

   -- Handle lower-case words first.
   for _, LowercaseKeyword in ipairs{"token"} do
      local Search = "%f[%w_]" .. LowercaseKeyword .. "%f[%W]"
      local Replace = "fsatoken_" .. LowercaseKeyword
      Modified = Modified:gsub(Search, Replace)
   end
--]]

   -- Handle macro/uppercase names that are not tokens (prefix FSATOKEN_)
   for _, UppercaseKeyword in ipairs{"NOTCHAR"} do
      local Search = "%f[_%w]" .. UppercaseKeyword .. "%f[%W]"
      local Replace = "FSATOKEN_" .. UppercaseKeyword
      Modified = Modified:gsub(Search, Replace)
   end

   -- Finally, handle token values (prefix with FSATOKEN_TK_)
   for _, UppercaseToken in ipairs{
          "END", "EMPTY", "BACKREF", "BEGLINE", "ENDLINE",
          "BEGWORD", "ENDWORD", "LIMWORD", "NOTLIMWORD", "QMARK",
          "STAR", "PLUS", "REPMN", "CAT", "OR", "LPAREN", "RPAREN",
          "ANYCHAR", "MBCSET", "WCHAR", "CSET"} do
      local Search = "%f[%w_]" .. UppercaseToken .. "%f[%W]"
      local Replace = "FSATOKEN_TK_" .. UppercaseToken
      Modified = Modified:gsub(Search, Replace)
   end

   -- Note: We do not try to maintain indentation of any comments after
   -- the substituted text.  This is because this prefixing is (at present)
   -- a temporary hack to keep the original code's and FSAToken's code
   -- namespaces entirely separate, so that both can be run side-by-side
   -- and the outputs compared.  Later, if/when the untangled code is
   -- chosen to replace the original, the prefixing can be eliminated,
   -- and/or this function can do more to maintain indentation.

   return Modified
end

------------------------------------------------------------------------------

----------------******** fsatoken.h ********----------------

print("Creating fsatoken.h...")
local f = assert(io.open("fsatoken.h", "w"))
assert(f:write([[
/* fsatoken - Create tokens for a compact, coherent regular expression language

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* Regular expression patterns are presented as text, possibly ASCII; the
   format is very expressive, but this comes at the cost of being somewhat
   expensive to interpret (including identifying invalid patterns).  By
   tokenising the pattern, we make life much easier for the parser and
   other search machinery that follows.

   This file defines the tokens that we use, both for the benefit of the
   lexer/parser/dfa analyser that share this information, and for other
   machinery (such as the C compiler) that may need to store and/or
   manipulate these items.  */

]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[

#ifndef FSATOKEN_H
#define FSATOKEN_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

/* Obtain definition of ptrdiff_t from stddef.h  */
#include <stddef.h>

/* C stream octets, and non-stream EOF, are self-representing tokens.
   We need to include stdio.h to obtain the definition of EOF.  */
#include <stdio.h>

]]))

-- Write out token-specific code extracted from dfa.c
assert(f:write(RawText("CHARBITS-octets"), "\n"))
assert(f:write(FSATokenSubst(RawText("NOTCHAR")), "\n"))
assert(f:write(EditedText("RegexpTokenType",
                         "typedef ptrdiff_t token",
                         "typedef ptrdiff_t fsatoken_token_t"), "\n"))
assert(f:write(FSATokenSubst(RawText("PredefinedTokens")), "\n"))

-- Define prototypes for functions provided by module body.
assert(f:write([[

/* prtok - Display token name (for debugging) */
#ifdef DEBUG
]]))
WriteExternDecl(f, Decls["prtok"])
assert(f:write([[
#endif /* DEBUG */
]]))

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[

#endif /* FSATOKEN_H */

/* vim:set shiftwidth=2: */
]]))

assert(f:close())

----------------******** fsatoken.c ********----------------

print("Creating fsatoken.c...")
local f = assert(io.open("fsatoken.c", "w"))
assert(f:write([[
/* fsatoken - Support routines specific to token definitions

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* The majority of the fsatoken[ch] module is in fsatoken.h, as it is
   shared by other modules.  This file provides token-specific support
   functions, such as functions to print tokens (for debugging).

   Although there is a relationship between some generic constructs
   such as character classes and the CSET token defined here, the generic
   items are defined in a separate support library, not in this module.
   This is because these tokens are very FSA/grep-specific, whereas the
   generic constructs are potentially widely useable, and may even be
   amenable to hardware-specific optimisations (such as superscalar
   opcodes: and/or/set/clear/test-and-set/test-and-clear and/or
   bit-counting operations).  */

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include "fsatoken.h"
#include <stdio.h>

]]))

--[[ Commented out for now, as unused macros cause an error.
-- Include gettext locale support for strings: _("message")
assert(f:write(RawText("NaturalLangSupport"), "\n"))
--]]

assert(f:write(RawText("prtok-ifdef-DEBUG-start"), "\n"))
assert(f:write(Decls["prtok"]))
assert(f:write(FSATokenSubst(RawText("prtok-fn-body")), "\n"))
assert(f:write(RawText("prtok-ifdef-DEBUG-end")))

-- Finally, add trailer line (vim)
assert(f:write([[
/* vim:set shiftwidth=2: */
]]))

------------------------------------------------------------------------------

----------------******** proto-lexparse.h ********----------------

print("Creating proto-lexparse.h...")
local f = assert(io.open("proto-lexparse.h", "w"))
assert(f:write([[
/* proto-lexparse -- Define how lexer and parser can interact.

]]))

assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Created by "untangle" script, written by behoffski.

(A very lengthy editorial/discussion of this module follows, partially
interspersed with a description of its intended function.  Comments and
criticism are welcome; if adopted, I expect that this comment block will be
rewritten to be much more focussed and much shorter.)

This file has been added very, very late in the non-linear process of
developing this code.  It addresses a feature that was not anticipated early
on, but which I believe is worth exploring in the future:  The presence of
pluggable lexers to supply to the parser, or, equivalently, breaking up the
master/slave relationship between the parser and the lexer, and replacing it
with a peer-to-peer conversation (perhaps the software equivalent of a
shared bus).

Quite a bit of the need for this protocol/interface module is that I've
demanded that the individual modules strictly hide their internals, and only
share information and/or control of resources (such as the parser calling
lex ()) in an explicit fashion.  Early on, the nature of connections between
the lexer and the parser appeared with the need to communicate {min,max}
values as part of the REPMN token; then the need to communicate the wide
character implied by the WCHAR token.

The crisis arose from an early decision that seemed fairly mild at the
time, but whose implications grew over time: I decided to extend the
meaning of the fsalex_syntax () call to not only include the parameters
named in the call (reg_syntax_t, case folding, eol char), but to also
capture the locale in force at that point, and make the lexer's behaviour
obey that locale, even if the locale was subsequently changed.

Another example of data interchange is that the lexer builds structures to
model multibyte character sets, but (at the time of writing) does not provide
an interface for clients to access this information.  The need for many and
varied interface functions has been steadily growing, and this is just
another example.

The breaking point came when I saw that the parser needed to know if it was
in a unibyte or multibyte locale (currently done by testing MB_CUR_MAX > 1).
Because of the pluggable-lexer architecture I'd created, the lexer was
already the authority on the locale, so the parser needed to query the
lexer for this information.  The number of specialised get/set functions
needed was out of control, and, if a pluggable architecture was to be
maintained, the number had to be drastically reduced.

Hence, this module, which defines the interface/protocol that any lexer must
serve, and which allows a lexer and its client to exchange information and
negotiate in a flexible fashion.  This comes at the expense of type safety in
many cases, but makes data exchange more formal.

This module only models essential exchanges between the parser and lexer
instances, and encourages the user to negotiate directly with the modules
for unrelated topics.  For example, the parser does not need to know what the
original plain-text pattern was; it receives a full description of the
pattern via the lexer tokens and associated parameters/structures.  So, there
is no "pattern" interchange described here:  The client can work directly
with the lexer.

The design splits interactions between modules into two sets.  The first
set covers code that is so central to the design of the modules on each
side that an indirect approach would tend to significantly complicate the
exchange, and/or where efficiency is a central concern; for these cases, a
direct function is provided to facilitate each such possible exchange.

The other set consists of the majority of cases (by type, but usually not by
runtime volume):  For these cases, a generic "exchange" function is provided
by the lexer, and is plugged into the parser.  This function has parameters
and return values that permit the parties to exchange information, although
some strict type safety is sacrificed where memory pointers can cross the
boundary.  A slightly simplified definition of the exchange function is:

    int lexer_provided_exchange(void *lexer_instance_context,
                                exchange_opcode_enum opcode,
                                void *generic_parameter);

The opcode defines what meaning (if any) is assigned to the parameter and/or
to the return value.  The exchange function, these opcodes, and the required
meaning of the values involved are formally defined by this module, and both the
lexer and the parser must conform to this protocol/interface.  By having this
module as an independent entity, it reinforces the independence of each party
in the arrangement, and can ease the creation of alternatives (e.g. perhaps
use a much simpler lexer for "grep -F" in a unibyte locale?).

On the downside, some of the initial opcodes/exchanges I'm writing here are
obviously directly lifted from the existing dfa.c lexer/parser interactions
(e.g. the multibyte sets struct(s)).  I'm hoping that this will not be a
fatal flaw, but in the final analysis this module is a side-effect of
requiring a pluggable lexer architecture, and this may not be acceptable to
others.  */

]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[

#ifndef PROTO_LEXPARSE_H
#define PROTO_LEXPARSE_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

/* The lexer returns a token, defined by fsatoken.  */
#include "fsatoken.h"

]]))

assert(f:write([[
/* Define opcodes for lexer/parser exchanges.  */
typedef enum proto_lexparse_opcode_enum
{
  PROTO_LEXPARSE_OP_GET_LOCALE,
  PROTO_LEXPARSE_OP_GET_IS_MULTIBYTE_ENV,
  PROTO_LEXPARSE_OP_GET_REPMN_MIN,
  PROTO_LEXPARSE_OP_GET_REPMN_MAX,
  PROTO_LEXPARSE_OP_GET_WIDE_CHAR,
  PROTO_LEXPARSE_OP_GET_DOTCLASS,
} proto_lexparse_opcode_t;

]]))

assert(f:write([[
/* Declare prototypes for main lex function (lex (), fetch a token), and
   the exchange function.  */
typedef fsatoken_token_t proto_lexparse_lex_fn_t (void *lexer_context);

typedef int proto_lexparse_exchange_fn_t (void *lexer_context,
                                          proto_lexparse_opcode_t opcode,
                                          void *parameter);

]]))

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[

#endif /* PROTO_LEXPARSE_H */

/* vim:set shiftwidth=2: */
]]))

assert(f:close())


------------------------------------------------------------------------------

----------------******** fsalex.h ********----------------

print("Creating fsalex.h...")
local f = assert(io.open("fsalex.h", "w"))
assert(f:write([[
/* fsalex - Repackage pattern text as compact, expressive tokens

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[

#ifndef FSALEX_H
#define FSALEX_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include "fsatoken.h"
#include "proto-lexparse.h"
#include <regex.h>

/* Multiple lexer instances can exist in parallel, so define an opaque
   type to collect together all the context relating to each instance.  */
typedef struct fsalex_ctxt_struct fsalex_ctxt_t;

]]))

-- Declare a function to create a new lexer state.
WriteExternDecl(f, Decls["lex-new"])

-- Add a function to receive the pattern and reset the state.

assert(f:write([[
/* Receive the pattern, and reset the lexical analyser state.  The
   interpretation of the chars (octets?) in the pattern (ASCII chars?
   variable-length UTF-8 sequences?  Simplified Chinese?  etc.) depends on
   the locale that was in force when fsalex_syntax () was called.  NULs may
   be present amongst the codes, which is why the length is given
   explicitly, rather than relying on strlen(3).  */
extern void
fsalex_pattern (fsalex_ctxt_t *lexer,
                char const *pattern, size_t const pattern_len);

]]))

-- As per dfa_syntax in dfa.c, create a function to receive directives on
-- how to interpret REs.
assert(f:write([[
/* Receive syntax directives, and other pattern interpretation
   instructions such as case folding and end-of-line character.
   In addition, this function configures various internal structures
   based on the locale in force.  */
extern void
fsalex_syntax (fsalex_ctxt_t *lexer,
               reg_syntax_t bits, int fold, unsigned char eol);

]]))

-- While dfa.h declares dfawarn() and dfaerror(), and demands that the client
-- supply functions at link time, we instead provide an interface function
-- so that the functions can be handed over explicitly.  This style may be
-- useful in the future if we want to move from a single lexer instance to
-- multiple instances (objects?)
assert(f:write([[
/* Define function prototypes for warning and error callbacks.  */
typedef void
fsalex_warn_callback_fn (const char *);
typedef void /* ?? _Noreturn? */
fsalex_error_callback_fn (const char *);

/* Receive functions to deal with exceptions detected by the lexer:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
extern void
fsalex_exception_fns (fsalex_ctxt_t *lexer,
                      fsalex_warn_callback_fn *warningfn,
                      fsalex_error_callback_fn *errorfn);

]]))

-- Add interface to lex() for use by the parser.
assert(f:write([[
/* Main function to incrementally consume and interpret the pattern text,
   and return a token describing a single lexical element as a token,
   perhaps with implied parameters such as character classes for CSET
   tokens, and {min,max} values for each REPMN token.  The user should
   call this function repeatedly, receiving one token each time, until
   the lexer detects a fatal error, or returns the END token.  */
/* This function must conform to proto_lexparse_lex_fn_t.  */
]]))
WriteExternDecl(f, Decls["lex"])

-- Non-core interactions between lexer and parser done by a plug-in function.
WriteExternDecl(f, Decls["lex-exchange"])

-- Add in CASE_FOLDED_BUFSIZE and case_folded_counterparts code
-- added Feb/Mar 2014.  This code makes me a little uneasy as
-- it's potentially showing off internals that I'd like to keep
-- hidden; but I'm fighting battles on too many fronts for now,
-- and will merely add the code as-is for now.
-- 140320: Added fsalex module prefix.
assert(f:write([[
/* Maximum number of characters that can be the case-folded
   counterparts of a single character, not counting the character
   itself.  This is 1 for towupper, 1 for towlower, and 1 for each
   entry in LONESOME_LOWER; see fsalex.c.  */
enum { FSALEX_CASE_FOLDED_BUFSIZE = 1 + 1 + 19 };

extern int fsalex_case_folded_counterparts (fsalex_ctxt_t *lexer,
                            wchar_t,
                            wchar_t[FSALEX_CASE_FOLDED_BUFSIZE]);
]]))

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[

#endif /* FSALEX_H */

/* vim:set shiftwidth=2: */
]]))

assert(f:close())

----------------******** fsalex.c ********----------------

print("Creating fsalex.c...")
local f = assert(io.open("fsalex.c", "w"))
assert(f:write([[
/* fsalex - Repackage pattern text as compact, expressive tokens

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* Always import environment-specific configuration items first.  */
#include <config.h>    /* define _GNU_SOURCE for regex extensions.  */

#include <assert.h>
#include "charclass.h"
#include <ctype.h>
#include "fsalex.h"
#include "fsatoken.h"
#include <limits.h>
#include <locale.h>
#include "proto-lexparse.h"
#include <regex.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <wctype.h>
#include "xalloc.h"

]]))

-- [[]]

-- Include gettext locale support for strings: _("message")
assert(f:write(RawText("NaturalLangSupport"), "\n"))

-- Define MBS_SUPPORT, if needed include wide-char support macros, and
-- also grab LANGINFO as it can tell us if we're in an UTF-8 locale.
-- assert(f:write(RawText("MultibyteSupport"), "\n"))
-- assert(f:write(RawText("HAVE_LANGINFO_CODESET"), "\n"))

-- General helper macros
assert(f:write(RawText("ISASCIIDIGIT"), "\n"))
assert(f:write(RawText("STREQ"), "\n"))
assert(f:write(RawText("MIN"), "\n"))
assert(f:write(EditedText("REALLOC_IF_NECESSARY",
               " while %(false%)", " while (0)"), "\n"))

-- Define the predicate *template* list before fleshing out the main lexer
-- struct, as we need to copy the template, verbatim, into the lexer context
-- when a new lexer is initialised (there may be reasons, such as locale,
-- that the same predicate might map to different elements in different
-- lexers).
assert(f:write([[
/* The following list maps the names of the Posix named character classes
   to predicate functions that determine whether a given character is in
   the class.  The leading [ has already been eaten by the lexical
   analyzer.  Additional objects are provided to assist the client:
   wchar_desc for multibyte matching, and class for octet matching.
   Lazy evaluation and caching are used to minimise processing costs, so
   these additional items are only valid after a class has been located using
   find_pred ().  */
typedef int predicate_t (wint_t, wctype_t);
typedef struct predicate_entry_struct
{
  const char *name;
  wctype_t wchar_desc;
  charclass_t *class;
} predicate_entry_t;

/* This list is a template, copied into each lexer's state, and interrogated
   and updated from there.  The membership of a class can vary due to locale
   and other settings, so each lexer must maintain its own list.  Duplicate
   class sharing across different lexer instances is facilitated by checks
   in charclass_finalise.  */
/* Locale portability note: We use isalpha_l () etc. functions, with the
   descriptor initialised when fsalex_syntax is called.  */
static predicate_entry_t template_predicate_list[] = {
  {"alpha",  0, NULL},
  {"alnum",  0, NULL},
  {"blank",  0, NULL},
  {"cntrl",  0, NULL},
  {"digit",  0, NULL},
  {"graph",  0, NULL},
  {"lower",  0, NULL},
  {"print",  0, NULL},
  {"punct",  0, NULL},
  {"space",  0, NULL},
  {"upper",  0, NULL},
  {"xdigit", 0, NULL},
  {NULL, 0, NULL}
};

#define PREDICATE_TEMPLATE_ITEMS \
    (sizeof template_predicate_list / sizeof *template_predicate_list)

]]))
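-- Illustrative sketch (comment only; the copy itself is assumed to happen
-- in the lexer's constructor, outside this chunk):
--   memcpy (lexer->predicates, template_predicate_list,
--           sizeof template_predicate_list);
-- Each lexer then fills in wchar_desc (fsalex_syntax) and class (find_pred)
-- in its private copy, so locale-dependent values never leak between
-- lexer instances.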

assert(f:write([[
/* Multibyte character-class storage.  Unibyte classes are handled in full
   by a combination of charclass and the CSET token with a class index
   parameter.  */
]]))
assert(f:write(RawText("mb_char_classes_struct"), "\n"))

assert(f:write([[
/* Flesh out the opaque instance context type given in the header.  */
struct fsalex_ctxt_struct
{
  /* Using the lexer without setting the syntax is a fatal error, so use a
     flag so we can report such errors in a direct fashion.  */
  bool syntax_initialised;

  /* Exception handling is done by explicit callbacks.  */
  fsalex_warn_callback_fn *warn_client;
  fsalex_error_callback_fn *abandon_with_error;

  /* Pattern pointer/length, updated as pattern is consumed.  */
  char const *lexptr;
  size_t lexleft;

  /* Syntax flags/characters directing how to interpret the pattern.  */
  /* ?? Note: We no longer have a flag here to indicate "syntax_bits_set",
     as was used in dfa.c.  We may want to reintroduce this.  */
  reg_syntax_t syntax_bits;
  bool case_fold;
  unsigned char eolbyte;

  /* Break out some regex syntax bits into boolean vars.  Do this for the
     ones that are heavily used, and/or where the nature of the bitmask flag
     test tends to clutter the lexer code.  */
  bool re_gnu_ops;               /* GNU regex operators are allowed.  */

  /* Carry dotclass here, as it's easier for clients (utf8) to perform
     class operations with this class, rather than to know intimate details
     of the regex syntax configuration bits and items such as eolbyte.  */
  charclass_t *dotclass;         /* ".": All chars except eolbyte and/or
                                        NUL, depending on syntax flags.  */
  charclass_index_t dotclass_index;

  /* Work variables to help organise lexer operation.  */
  fsatoken_token_t lasttok;
  bool laststart;
  size_t parens;

  /* Character class predicate mapping/caching table.  */
  predicate_entry_t predicates[PREDICATE_TEMPLATE_ITEMS];

  /* Minrep and maxrep are actually associated with the REPMN token, and
     need to be accessible outside this module (by the parser), perhaps
     by an explicit interface call.  In the far, far future, a
     completely-reworked token list may see these values properly become
     integrated into the token stream (perhaps by a pair of "Parameter"
     tokens?  Perhaps by a MINREP token with 1 parameter, followed by a
     MAXREP token with a corresponding parameter?)  */
  int minrep, maxrep;

  /* Booleans to simplify unibyte/multibyte code selection paths.  */
  bool unibyte_locale;
  bool multibyte_locale;

  /* REVIEWME: Wide-character support variables.  */
  int cur_mb_len;       /* Length (in bytes) of the last character
                           fetched; this is needed when backing up during
                           lexing.  In a unibyte locale, this variable
                           remains at 1; otherwise, it is
                           updated as required by FETCH_WC.  */

  /* These variables are used only if in a multibyte locale.  */
  wchar_t wctok;         /* Storage for a single multibyte character, used
                           both during lexing, and as the implied parameter
                           of a WCHAR token returned by the lexer.  */
  mbstate_t mbrtowc_state; /* State management area for mbrtowc to use. */

]]))

assert(f:write(EditedText("dfa-struct mbrtowc_cache",
                          "NOTCHAR", "FSATOKEN_NOTCHAR"), "\n"))

assert(f:write(EditedText("dfa-struct bracket-expressions-array",
   "^  /%* Array of the bracket expression in the DFA",
   "  /* Array of multibyte bracket expressions"),
               "\n"))

assert(f:write([[
};

]]))

assert(f:write(EditedText("setbit::wchar_t comment",
        "Even for MB_CUR_MAX == 1,", "Even in unibyte locales,")))
assert(f:write(EditedText("MBS_SUPPORT::setbit_wc",
        "charclass c", "charclass_t *c",
        "\n  setbit %(b, c%)", "\n  charclass_setbit (b, c)"),
               "\n"))

-- Up until today (9 Mar 2014), setbit_case_fold_c from v. 2.17 was a
-- bit of a mess, and well within my sights to clean up.  However, last
-- night, I resynced with the latest git master, and it has been
-- recently cleaned up into a version that's probably better than what I
-- was contemplating.  (My mail feed screwed up just after the 2.17
-- release, and so I missed a week or two's worth of messages.)
assert(f:write(FSATokenSubst(EditedText("setbit_case_fold_c",
          "charclass c", "charclass_t *c",
          "\n      setbit %(i, c%)", "\n      charclass_setbit (i, c)",
          " MB_CUR_MAX must be 1.", "We must be in a unibyte locale.")),
               "\n"))

-- to_uchar converts a char to unsigned char, using a function call
-- rather than merely typecasting as it catches some type errors that
-- the cast doesn't.  This fn is used in FETCH_SINGLE_CHAR.
assert(f:write(RawText("to_uchar_typecheck"), "\n"))

-- Single-byte and multibyte character fetching, rearranged by hand.
-- Merely write dfambcache verbatim at first; we will need to
-- modify this to work properly.

--assert(f:write(RawText("dfambcache-and-mbs_to_wchar"), "\n"))
assert(f:write(EditedText("dfambcache",
      "dfambcache %(struct dfa %*d", "mb_uchar_cache (fsalex_ctxt_t *lexer",
                          "d%->mbrtowc_cache", "lexer->mbrtowc_cache"),
               "\n"))
--]]

assert(f:write([[
/* This function is intimately connected with multibyte (wide-char) handling
   in the macro FETCH_WC below, in the case where FETCH_SINGLE_CHAR has run
   but the result has been found to be inconclusive.  It works by unwinding
   the FETCH_SINGLE_CHAR side-effects (lexptr/lexleft), then calling
   mbrtowc on the pattern space, and communicates mbrtowc's understanding
   of the octet stream back to the caller:
     - If a valid multibyte octet sequence is next, then the wide character
       associated with this sequence is written back to *p_wchar, and the
       number of octets consumed is returned; or
     - If the sequence is invalid for any reason, the mbrtowc working state
       is reset (zeroed), *p_wchar is not modified, and 1 is returned.
   Lexer state variables, including cur_mb_len, mbs, lexleft and lexptr, are
   updated as appropriate by this function (mainly if mbrtowc succeeds).
   The wide NUL character is unusual: although it is a 1-octet sequence,
   mbrtowc reports its length as 0; we report it as length 1, and write
   the converted wide character in temp_wchar back to the caller.  */
/* ?? This code, in partnership with the macro FETCH_WC, is closely related
   to mbs_to_wchar in dfa.c.  There is documentation there (e.g. pattern
   must end in a sentinel, shift encodings not supported, plus other
   comments/guarantees) that is important, but I'm deferring writing anything
   up at present until I see how this code is received.  */
static size_t
fetch_offset_wide_char (fsalex_ctxt_t *lexer, wchar_t *p_wchar)
{
  size_t nbytes;
  wchar_t temp_wchar;

  nbytes = mbrtowc (&temp_wchar,
                    lexer->lexptr - 1, lexer->lexleft + 1,
                    &lexer->mbrtowc_state);
  switch (nbytes)
    {
    case (size_t) -2:
    case (size_t) -1:
      /* Conversion failed: Incomplete (-2) or invalid (-1) sequence.  */
      memset (&lexer->mbrtowc_state, 0, sizeof (lexer->mbrtowc_state));
      return 1;

    case (size_t) 0:
      /* This is the wide NUL character, actually 1 byte long. */
      nbytes = 1;
      break;

    default:
      /* Converted character is in temp_wchar, and nbytes is a byte count.  */
      break;
    }
  /* One or more bytes were converted; report the result to the caller.  */
  *p_wchar = temp_wchar;

  /* Update the number of bytes consumed (offset by 1 since
     FETCH_SINGLE_CHAR grabbed one earlier).  */
  lexer->lexptr  += nbytes - 1;
  lexer->lexleft -= nbytes - 1;

  return nbytes;
}

]]))

assert(f:write([[
/* Single-character input fetch, with EOF/error handling.  Note that
   characters become unsigned here.  If no characters are available,
   the macro either returns END or reports an error, depending on
   eoferr.  Otherwise, one character is consumed (lexptr/lexleft),
   the char is converted into an unsigned char, and is written into
   the parameter c.  */
#define FETCH_SINGLE_CHAR(lexer, c, eoferr)                  \
  do {                                                       \
    if (! (lexer)->lexleft)                                  \
      {                                                      \
        if ((eoferr) != 0)                                   \
          (lexer)->abandon_with_error (eoferr);              \
        else                                                 \
          return FSATOKEN_TK_END;                            \
      }                                                      \
    (c) = to_uchar (*(lexer)->lexptr++);                     \
    (lexer)->lexleft--;                                      \
  } while (0)

/* Do the fetch in stages: Single char, octet+multibyte cache check,
   and a possible wide-char fetch if the cache result indicates that the
   input sequence is longer than a single octet.  The first fetch handles
   end-of-input cases (if this happens, control never reaches the rest of
   the macro); otherwise, it leaves the octet in temp_uchar, which is used
   in the cache lookup and may be the single-octet result.  A cache result
   of WEOF means that the octet is not a complete sequence by itself, so
   fetch_offset_wide_char rewinds lexptr/lexleft to undo the single-char
   fetch's side-effects and, depending on whether mbrtowc finds a valid
   sequence, propagates either the multibyte fetch or the single-char
   fetch back to the caller.  */
# define FETCH_WC(lexer, c, wc, eoferr)                      \
  do {                                                       \
    wchar_t temp_wc;                                         \
    unsigned char temp_uchar;                                \
    (lexer)->cur_mb_len = 1;                                 \
    FETCH_SINGLE_CHAR ((lexer), temp_uchar, (eoferr));       \
    temp_wc = (lexer)->mbrtowc_cache[temp_uchar];            \
    if (temp_wc != WEOF)                                     \
      {                                                      \
        (c)  = temp_uchar;                                   \
        (wc) = temp_wc;                                      \
      }                                                      \
    else                                                     \
      {                                                      \
        size_t nbytes;                                       \
        temp_wc = temp_uchar;                                \
        nbytes = fetch_offset_wide_char ((lexer), &temp_wc); \
        (wc) = temp_wc;                                      \
        (c) = nbytes == 1 ? temp_uchar : EOF;                \
        (lexer)->cur_mb_len = nbytes;                        \
      }                                                      \
  } while (0)

]]))
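-- Illustrative walk-through (comment only; assumes a UTF-8 locale).  For
-- the two-octet sequence 0xC3 0xA9 (U+00E9, e-acute):
--   FETCH_SINGLE_CHAR   -> temp_uchar = 0xC3; lexptr/lexleft advance by 1
--   mbrtowc_cache[0xC3] -> WEOF, as 0xC3 is never a complete sequence
--   fetch_offset_wide_char rewinds by 1, mbrtowc consumes both octets and
--   returns 2, so the caller sees c == EOF, wc == U+00E9, cur_mb_len == 2.
-- For a plain ASCII octet such as 'a', the cache yields L'a' directly and
-- the else-branch is never taken.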

assert(f:write([[
/* Given a predicate name, find it in a list, and report the list entry
   to the caller.  If the name is not recognised, the function returns NULL.
   The list entry includes a charclass set and (if relevant) a wide-char
   descriptor for testing for the predicate.  Lazy evaluation and caching
   are used to keep processing costs down.  */
static predicate_entry_t *
find_pred (fsalex_ctxt_t *lexer, const char *str)
{
  predicate_entry_t *p_entry;
  charclass_t *work_class;

  for (p_entry = lexer->predicates; p_entry->name; p_entry++)
    {
      if (STREQ (str, p_entry->name))
        break;
    }

  /* If there was no matching predicate name found, return NULL.  */
  if (! p_entry->name)
    return NULL;

  /* Is the charclass pointer NULL for this entry? */
  if (p_entry->class == NULL)
    {
      /* Yes, allocate, set up and cache a charclass for this predicate.  Note
         that the wchar_desc entries were set up in fsalex_syntax ().  */
      int i;
      charclass_index_t index;
      wctype_t wctype_desc;

      wctype_desc = p_entry->wchar_desc;
      work_class = charclass_alloc ();
      for (i = 0; i < FSATOKEN_NOTCHAR; i++)
        {
          wchar_t wc;

          /* Try integer->unsigned char->wide char using lexer's mbrtowc_cache
             array, and, if successful, test for class membership, and set the
             bit in the class if the value is a member.  */
          wc = lexer->mbrtowc_cache[i];
          if (iswctype (wc, wctype_desc))
            charclass_setbit (i, work_class);
        }

      /* Finalise the class, and obtain a persistent class pointer.  */
      index = charclass_finalise (work_class);
      p_entry->class = charclass_get_pointer (index);
    }

  /* Return predicate entry to the caller.  */
  return p_entry;
}

]]))
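-- Illustrative sketch (comment only; hypothetical call sequence):
--   predicate_entry_t *p = find_pred (lexer, "digit");
-- On the first call p->class is NULL, so a charclass is allocated, a bit
-- is set for each octet i whose mbrtowc_cache entry satisfies
-- iswctype (wc, p->wchar_desc), and the finalised class pointer is cached;
-- later lookups of "digit" return the cached entry at no extra cost.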

assert(f:write(EditedText("using-simple-locale",
                          "using_simple_locale %(void%)",
                              "using_simple_locale (fsalex_ctxt_t *lexer)",
                          "MB_CUR_MAX > 1", "lexer->multibyte_locale"),
               "\n"))

-- Write out parse_bracket_exp declaration with manual fsatoken edit
assert(f:write(Decls["parse_bracket_exp"]))

-- Remove zeroclass definition from a nested place inside parse_bracket_exp
-- as we now depend on charclass's guarantee that class index 0 is the
-- zeroset.
local ParseBracketExp = RawText("parse_bracket_exp-body")
ParseBracketExp = TextSubst(ParseBracketExp, [[
  if (MB_CUR_MAX > 1)
    {
      static charclass zeroclass;
      work_mbc->invert = invert;
      work_mbc->cset = equal (ccl, zeroclass) ? -1 : charclass_index (ccl);
      return MBCSET;
    }

  if (invert)
    {
      assert (MB_CUR_MAX == 1);
      notset (ccl);
      if (syntax_bits & RE_HAT_LISTS_NOT_NEWLINE)
        clrbit (eolbyte, ccl);
    }

  return CSET + charclass_index (ccl);
]], [[
  if (lexer->multibyte_locale)
    {
      charclass_t *zeroclass = charclass_get_pointer (0);
      work_mbc->invert = invert;
      work_mbc->cset = charclass_equal (ccl, zeroclass)
                              ? -1 : charclass_finalise (ccl);
      return MBCSET;
    }

  if (invert)
    {
      assert (lexer->unibyte_locale);
      charclass_notset (ccl);
      if (syntax_bits & RE_HAT_LISTS_NOT_NEWLINE)
        charclass_clrbit (eolbyte, ccl);
    }

  return CSET + charclass_finalise (ccl);
]])

-- Classes are now opaque.
ParseBracketExp = TextSubst(ParseBracketExp, [[
{
  bool invert;
  int c, c1, c2;
  charclass ccl;
]], [[
{
  bool invert;
  int c, c1, c2;
  charclass_t *ccl;
]])

-- Replace dfaerror and dfawarn with explicitly-provided functions.
ParseBracketExp = TextSubst(ParseBracketExp,
                            " dfawarn ", " lexer->warn_client ",
                            " dfaerror ", " lexer->abandon_with_error ")

-- Change "work" charclass (ccl) from local variable to explicitly-allocated
-- dynamic class from charclass.
ParseBracketExp = TextSubst(ParseBracketExp, [[
      /* Initialize work area.  */
      work_mbc = &(dfa->mbcsets[dfa->nmbcsets++]);
      memset (work_mbc, 0, sizeof *work_mbc);
    }
  else
    work_mbc = NULL;

  memset (ccl, 0, sizeof ccl);
  FETCH_WC (c, wc, _("unbalanced ["));
]], [[
      /* Initialize work area.  */
      work_mbc = &(dfa->mbcsets[dfa->nmbcsets++]);
      memset (work_mbc, 0, sizeof *work_mbc);
    }
  else
    work_mbc = NULL;

  ccl = charclass_alloc ();
  FETCH_WC (c, wc, _("unbalanced ["));
]])

-- Change find_pred code to use the typedef, and to merge the found predicate
-- static class into the function's dynamic class.  Also replace dfaerror
-- with internal abandon_with_error.
ParseBracketExp = TextSubst(ParseBracketExp, [[
              if (c1 == ':')
                /* Build character class.  POSIX allows character
                   classes to match multicharacter collating elements,
                   but the regex code does not support that, so do not
                   worry about that possibility.  */
                {
                  char const *class
                    = (case_fold && (STREQ (str, "upper")
                                     || STREQ (str, "lower")) ? "alpha" : str);
                  const struct dfa_ctype *pred = find_pred (class);
                  if (!pred)
                    dfaerror (_("invalid character class"));

                  if (MB_CUR_MAX > 1 && !pred->single_byte_only)
                    {
                      /* Store the character class as wctype_t.  */
                      wctype_t wt = wctype (class);

                      REALLOC_IF_NECESSARY (work_mbc->ch_classes,
                                            ch_classes_al,
                                            work_mbc->nch_classes + 1);
                      work_mbc->ch_classes[work_mbc->nch_classes++] = wt;
                    }

                  for (c2 = 0; c2 < NOTCHAR; ++c2)
                    if (pred->func (c2))
                      setbit (c2, ccl);
                }
]], [[
              if (c1 == ':')
                /* Find and merge named character class.  POSIX allows
                   character classes to match multicharacter collating
                   elements, but the regex code does not support that,
                   so do not worry about that possibility.  */
                {
                  char const *class;
                  predicate_entry_t *pred;

                  class = str;
                  if (case_fold && (STREQ (class, "upper")
                                      || STREQ (class, "lower")))
                    class = "alpha";
                  pred = find_pred (lexer, class);
                  if (! pred)
                    lexer->abandon_with_error (_("invalid character class"));
                  charclass_unionset (pred->class, ccl);

                  /* Does this class have a wide-char type descriptor? */
                  if (lexer->multibyte_locale && pred->wchar_desc)
                    {
                      /* Yes, add it to the work area's wide-char descriptor
                         list.  */
                      REALLOC_IF_NECESSARY (work_mbc->ch_classes,
                                            ch_classes_al,
                                            work_mbc->nch_classes + 1);
                      work_mbc->ch_classes[work_mbc->nch_classes++]
                                 = pred->wchar_desc;
                    }
                }
]])

-- Remove "dfa->" references, as we kludged in static variables for the
-- multibyte class sets for the moment.
-- UPDATED April 2014: Now have moved everything into "lexer" context, so
-- no static variables remain.
ParseBracketExp = TextSubst(ParseBracketExp, [[
  if (MB_CUR_MAX > 1)
    {
      REALLOC_IF_NECESSARY (dfa->mbcsets, dfa->mbcsets_alloc,
                            dfa->nmbcsets + 1);

      /* dfa->multibyte_prop[] hold the index of dfa->mbcsets.
         We will update dfa->multibyte_prop[] in addtok, because we can't
         decide the index in dfa->tokens[].  */

      /* Initialize work area.  */
      work_mbc = &(dfa->mbcsets[dfa->nmbcsets++]);
      memset (work_mbc, 0, sizeof *work_mbc);
    }
  else
    work_mbc = NULL;
]], [[
  if (lexer->multibyte_locale)
    {
      REALLOC_IF_NECESSARY (lexer->mbcsets, lexer->mbcsets_alloc,
                            lexer->nmbcsets + 1);

      /* Initialize work area.  */
      work_mbc = &(lexer->mbcsets[lexer->nmbcsets++]);
      memset (work_mbc, 0, sizeof *work_mbc);
    }
  else
    work_mbc = NULL;
]])

ParseBracketExp = TextSubst(ParseBracketExp, [[
              else if (using_simple_locale ())
                {
                  for (c1 = c; c1 <= c2; c1++)
                    setbit (c1, ccl);
                  if (case_fold)
                    {
                      int uc = toupper (c);
                      int uc2 = toupper (c2);
                      for (c1 = 0; c1 < NOTCHAR; c1++)
                        {
                          int uc1 = toupper (c1);
                          if (uc <= uc1 && uc1 <= uc2)
                            setbit (c1, ccl);
                        }
                    }
]], [[
              else if (using_simple_locale (lexer))
                {
                  for (c1 = c; c1 <= c2; c1++)
                    charclass_setbit (c1, ccl);
                  if (case_fold)
                    {
                      int uc = toupper (c);
                      int uc2 = toupper (c2);
                      for (c1 = 0; c1 < NOTCHAR; c1++)
                        {
                          int uc1 = toupper (c1);
                          if (uc <= uc1 && uc1 <= uc2)
                            charclass_setbit (c1, ccl);
                        }
                    }
]])

-- Keep cracking down on MB_CUR_MAX, case-by-case
ParseBracketExp = TextSubst(ParseBracketExp, [[
      if (MB_CUR_MAX == 1)
        {
          if (case_fold)
            setbit_case_fold_c (c, ccl);
          else
            setbit (c, ccl);
          continue;
        }
]], [[
      if (lexer->unibyte_locale)
        {
          if (case_fold)
            setbit_case_fold_c (c, ccl);
          else
            charclass_setbit (c, ccl);
          continue;
        }
]])
ParseBracketExp = TextSubst(ParseBracketExp, [[
          if (c2 != ']')
            {
              if (c2 == '\\' && (syntax_bits & RE_BACKSLASH_ESCAPE_IN_LISTS))
                FETCH_WC (c2, wc2, _("unbalanced ["));

              if (MB_CUR_MAX > 1)
                {
]], [[
          if (c2 != ']')
            {
              if (c2 == '\\' && (syntax_bits & RE_BACKSLASH_ESCAPE_IN_LISTS))
                FETCH_WC (c2, wc2, _("unbalanced ["));

              if (lexer->multibyte_locale)
                {
]])

-- Hack around with CASE_FOLDED_BUFSIZE code by hand
ParseBracketExp = TextSubst(ParseBracketExp, [[
          wchar_t folded[CASE_FOLDED_BUFSIZE];
          int i, n = case_folded_counterparts (wc, folded);
]], [[
          wchar_t folded[FSALEX_CASE_FOLDED_BUFSIZE];
          int i, n = fsalex_case_folded_counterparts (lexer, wc, folded);
]])

-- Convert module-global variable references to instance-local refs.
ParseBracketExp = TextSubst(ParseBracketExp, [[
                  FETCH_WC (c, wc, _("unbalanced ["));
                  if ((c == c1 && *lexptr == ']') || lexleft == 0)
]], [[
                  FETCH_WC (c, wc, _("unbalanced ["));
                  if ((c == c1 && *lexer->lexptr == ']')
                          || lexer->lexleft == 0)
]])
ParseBracketExp = TextSubst(ParseBracketExp, [[
          /* In the case [x-], the - is an ordinary hyphen,
             which is left in c1, the lookahead character.  */
          lexptr -= cur_mb_len;
          lexleft += cur_mb_len;
]], [[
          /* In the case [x-], the - is an ordinary hyphen,
             which is left in c1, the lookahead character.  */
          lexer->lexptr  -= lexer->cur_mb_len;
          lexer->lexleft += lexer->cur_mb_len;
]])

ParseBracketExp = TextSubst(ParseBracketExp, [=[
          /* A bracket expression like [a-[.aa.]] matches an unknown set.
             Treat it like [-a[.aa.]] while parsing it, and
             remember that the set is unknown.  */
          if (c2 == '[' && *lexptr == '.')
]=], [=[
          /* A bracket expression like [a-[.aa.]] matches an unknown set.
             Treat it like [-a[.aa.]] while parsing it, and
             remember that the set is unknown.  */
          if (c2 == '[' && *lexer->lexptr == '.')
]=])

-- using_simple_locale () now needs lexer param as MB_CUR_MAX describes the
-- current locale, not necessarily the locale of this lexer.
ParseBracketExp = TextSubst(ParseBracketExp, "using_simple_locale ()",
                            "using_simple_locale (lexer)")

-- Rewrite token references to have explicit FSATOKEN_ prefixes.
ParseBracketExp = FSATokenSubst(ParseBracketExp)

-- Rewrite references to variables moved into lexer context.
-- Try pattern search/replace for these variables; this may cause collateral
-- damage, but the alternative is tedious.
for _, Keyword in ipairs{"case_fold", "syntax_bits", "eolbyte", "dotclass"} do
   ParseBracketExp = ParseBracketExp:gsub("([^%w_])" .. Keyword .. "([^%w_])",
                                       "%1lexer->" .. Keyword .. "%2")
end
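-- For example, " case_fold && isalpha (c)" becomes
-- " lexer->case_fold && isalpha (c)", while an identifier such as
-- setbit_case_fold_c is untouched because its surrounding characters
-- fail the [^%w_] guards.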
ParseBracketExp = ParseBracketExp:gsub("  FETCH_WC %(",
                                       "  FETCH_WC (lexer, ")
assert(f:write(ParseBracketExp, "\n"))

-- Recently-added multibyte code: lonesome_lower, case_folded_counterparts
assert(f:write(RawText("unicode-lonesome-lower-table"), "\n"))

assert(f:write([[
int fsalex_case_folded_counterparts (fsalex_ctxt_t *lexer,
                            wchar_t c,
                            wchar_t folded[FSALEX_CASE_FOLDED_BUFSIZE])
]]))
local CaseFoldedCounterpartsBody = RawText("case_folded_counterparts-body")
CaseFoldedCounterpartsBody = TextSubst(CaseFoldedCounterpartsBody, [[
  int i;
  int n = 0;
  wint_t uc = towupper (c);
]], [[
  int i;
  int n = 0;

  /* Exit quickly if there's nothing to be done.  This test was previously
     found on the client side (e.g. fsaparse), but has been moved here as
     we want to keep internals hidden, if it's not too costly.  */
  if (! lexer->case_fold)
    return 0;

  wint_t uc = towupper (c);
]])
assert(f:write(CaseFoldedCounterpartsBody, "\n"))
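-- Illustrative sketch (comment only; exact results are locale-dependent):
-- with case folding enabled,
--   n = fsalex_case_folded_counterparts (lexer, L'k', folded);
-- typically yields n >= 1 with folded[0] == L'K' (and, in Unicode locales,
-- possibly U+212A KELVIN SIGN as well); with lexer->case_fold false the
-- early-out added above returns 0 without touching folded[].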

-- Lex decl and body; lots more lexer-> edits in the body...
assert(f:write(Decls["lex"]))
local LexBody = RawText("lex-body")

-- We set up static "letters" and "notletters" classes (and associated
-- indices) in fsalex_syntax, so the code here can be much, much simpler.
-- Note (again) that "letters" is a rotten name for IS_WORD_CONSTITUENT
-- characters: isalnum() plus '_', but that's how letters/notletters is
-- set up.  We also remove variables c2 and ccl from LexBody.  Finally, we
-- test syntax_initialised first, as leaving this unset is a fatal error.
LexBody = TextSubst(LexBody, [[
{
  unsigned int c, c2;
  bool backslash = false;
  charclass ccl;
  int i;
]], [[
{
  unsigned int c;
  bool backslash = false;
  int i;
  predicate_entry_t *predicate;
  charclass_t *work_class;

  /* Ensure that syntax () has been called on this lexer instance; many
     things will fail if this isn't done.  */
  assert (lexer->syntax_initialised);
]])
-- Replace dfaerror and dfawarn with explicitly-provided functions.
LexBody = LexBody:gsub(" dfawarn ", " lexer->warn_client ")
LexBody = LexBody:gsub(" dfaerror ", " lexer->abandon_with_error ")

-- Deal with some early cases where static vars are now in lexer state
LexBody = TextSubst(LexBody, [[
  for (i = 0; i < 2; ++i)
    {
      FETCH_WC (c, wctok, NULL);
      if (c == (unsigned int) EOF)
        goto normal_char;

      switch (c)
        {
        case '\\':
          if (backslash)
            goto normal_char;
          if (lexleft == 0)
]], [[
  for (i = 0; i < 2; ++i)
    {
      FETCH_WC (c, lexer->wctok, NULL);
      if (c == (unsigned int) EOF)
        goto normal_char;

      switch (c)
        {
        case '\\':
          if (backslash)
            goto normal_char;
          if (lexer->lexleft == 0)
]])

-- Make "not-RE_NO_GNU_OPS" more readable by using a direct flag variable.
LexBody = TextSubst(LexBody,
                    "!(syntax_bits & RE_NO_GNU_OPS)",
                    "lexer->re_gnu_ops")

LexBody = TextSubst(LexBody, [[
        case 'w':
        case 'W':
          if (!backslash || (syntax_bits & RE_NO_GNU_OPS))
            goto normal_char;
          zeroset (ccl);
          for (c2 = 0; c2 < NOTCHAR; ++c2)
            if (IS_WORD_CONSTITUENT (c2))
              setbit (c2, ccl);
          if (c == 'W')
            notset (ccl);
          laststart = false;
          return lasttok = CSET + charclass_index (ccl);
]],
[==[
        case 'w':
        case 'W':
          /* Can mean "[_[:alnum:]]" (\w) or its inverse (\W).  */
          if (! (backslash && lexer->re_gnu_ops))
            goto normal_char;
          lexer->laststart = false;
          predicate = find_pred (lexer, "alnum");
          work_class = charclass_alloc ();
          charclass_copyset (predicate->class, work_class);
          charclass_setbit ('_', work_class);
          if (c == 'W')
            charclass_notset (work_class);
          return lexer->lasttok = FSATOKEN_TK_CSET
                      + charclass_finalise (work_class);
]==])

-- Handle "\s" and "\S" in the same fashion as "\w" and "\W".  However,
-- multibyte issues are more of a direct issue in the code.
LexBody = TextSubst(LexBody, [[
        case 's':
        case 'S':
          if (!backslash || (syntax_bits & RE_NO_GNU_OPS))
            goto normal_char;
          if (MB_CUR_MAX == 1)
            {
              zeroset (ccl);
              for (c2 = 0; c2 < NOTCHAR; ++c2)
                if (isspace (c2))
                  setbit (c2, ccl);
              if (c == 'S')
                notset (ccl);
              laststart = false;
              return lasttok = CSET + charclass_index (ccl);
            }

]], [==[
        case 's':
        case 'S':
          /* Can mean "[[:space:]]" (\s) or its inverse (\S).  */
          if (! (backslash && lexer->re_gnu_ops))
            goto normal_char;
          lexer->laststart = false;
          if (lexer->unibyte_locale)
            {
              predicate = find_pred (lexer, "space");
              if (c == 's')
                return lexer->lasttok = FSATOKEN_TK_CSET
                            + charclass_get_index (predicate->class);
              work_class = charclass_alloc ();
              charclass_copyset (predicate->class, work_class);
              charclass_notset (work_class);
              return lexer->lasttok = FSATOKEN_TK_CSET
                            + charclass_finalise (work_class);
            }

]==])

-- Edit match-any (".") to use a static class, in a similar fashion to the
-- substitution above.  The work has been shifted into fsalex_syntax.
LexBody = TextSubst(LexBody, [[
        case '.':
          if (backslash)
            goto normal_char;
          if (MB_CUR_MAX > 1)
            {
              /* In multibyte environment period must match with a single
                 character not a byte.  So we use ANYCHAR.  */
              laststart = false;
              return lasttok = ANYCHAR;
            }
          zeroset (ccl);
          notset (ccl);
          if (!(syntax_bits & RE_DOT_NEWLINE))
            clrbit (eolbyte, ccl);
          if (syntax_bits & RE_DOT_NOT_NULL)
            clrbit ('\0', ccl);
          laststart = false;
          return lasttok = CSET + charclass_index (ccl);
]], [[
        case '.':
          if (backslash)
            goto normal_char;
          lexer->laststart = false;
          if (lexer->multibyte_locale)
            {
              /* In multibyte environment period must match with a single
                 character not a byte.  So we use ANYCHAR.  */
              return lexer->lasttok = FSATOKEN_TK_ANYCHAR;
            }
          return lexer->lasttok = FSATOKEN_TK_CSET + lexer->dotclass_index;
]])

-- Finally, edit LexBody where alphanumeric characters are hacked into classes
-- if case folding is selected.  This is the key place that I would like to
-- attack in Stage 2:  Recast the token set so that high-level information is
-- modelled explicitly for longer (especially up to where dfamusts can
-- contemplate forming "case-insensitive fixed strings" as a search option).
LexBody = TextSubst(LexBody, [[
        default:
        normal_char:
          laststart = false;
          /* For multibyte character sets, folding is done in atom.  Always
             return WCHAR.  */
          if (MB_CUR_MAX > 1)
            return lasttok = WCHAR;

          if (case_fold && isalpha (c))
            {
              zeroset (ccl);
              setbit_case_fold_c (c, ccl);
              return lasttok = CSET + charclass_index (ccl);
            }

          return lasttok = c;
]], [[
        default:
        normal_char:
          lexer->laststart = false;
          /* For multibyte character sets, folding is done in atom.  Always
             return WCHAR.  */
          if (lexer->multibyte_locale)
            return lexer->lasttok = FSATOKEN_TK_WCHAR;

          if (case_fold && isalpha (c))
            {
              charclass_t *ccl = charclass_alloc ();
              setbit_case_fold_c (c, ccl);
              return lexer->lasttok = FSATOKEN_TK_CSET
                          + charclass_finalise (ccl);
            }

          return lexer->lasttok = c;
]])

LexBody = TextSubst(LexBody, [[
        case '$':
          if (backslash)
            goto normal_char;
          if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS
              || lexleft == 0
              || (syntax_bits & RE_NO_BK_PARENS
                  ? lexleft > 0 && *lexptr == ')'
                  : lexleft > 1 && lexptr[0] == '\\' && lexptr[1] == ')')
              || (syntax_bits & RE_NO_BK_VBAR
                  ? lexleft > 0 && *lexptr == '|'
                  : lexleft > 1 && lexptr[0] == '\\' && lexptr[1] == '|')
              || ((syntax_bits & RE_NEWLINE_ALT)
                  && lexleft > 0 && *lexptr == '\n'))
            return lasttok = ENDLINE;
          goto normal_char;
]], [[
        case '$':
          if (backslash)
            goto normal_char;
          if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS
              || lexer->lexleft == 0
              || (syntax_bits & RE_NO_BK_PARENS
                  ? lexer->lexleft > 0 && *lexer->lexptr == ')'
                  : lexer->lexleft > 1 && lexer->lexptr[0] == '\\' && lexer->lexptr[1] == ')')
              || (syntax_bits & RE_NO_BK_VBAR
                  ? lexer->lexleft > 0 && *lexer->lexptr == '|'
                  : lexer->lexleft > 1 && lexer->lexptr[0] == '\\' && lexer->lexptr[1] == '|')
              || ((syntax_bits & RE_NEWLINE_ALT)
                  && lexer->lexleft > 0 && *lexer->lexptr == '\n'))
            return lexer->lasttok = ENDLINE;
          goto normal_char;
]])

-- Use temporary minrep/maxrep variables in lexer "{}" processing, and copy
-- these values to lexer->minrep and lexer->maxrep when complete.
LexBody = TextSubst(LexBody, [[
          /* Cases:
             {M} - exact count
             {M,} - minimum count, maximum is infinity
             {,N} - 0 through N
             {,} - 0 to infinity (same as '*')
             {M,N} - M through N */
          {
            char const *p = lexptr;
            char const *lim = p + lexleft;
            minrep = maxrep = -1;
]], [[
          /* Cases:
             {M} - exact count
             {M,} - minimum count, maximum is infinity
             {,N} - 0 through N
             {,} - 0 to infinity (same as '*')
             {M,N} - M through N */
          {
            char const *p = lexer->lexptr;
            char const *lim = p + lexer->lexleft;
            int minrep = -1;
            int maxrep = -1;
]])

LexBody = TextSubst(LexBody, [[
            if (RE_DUP_MAX < maxrep)
              lexer->abandon_with_error (_("Regular expression too big"));
            lexptr = p;
            lexleft = lim - p;
          }
          laststart = false;
          return lasttok = REPMN;
]], [[
            if (RE_DUP_MAX < maxrep)
              lexer->abandon_with_error (_("Regular expression too big"));
            lexer->lexptr = p;
            lexer->lexleft = lim - p;
            lexer->minrep = minrep;
            lexer->maxrep = maxrep;
          }
          lexer->laststart = false;
          return lasttok = REPMN;
]])

LexBody = TextSubst(LexBody, [[
        case '^':
          if (backslash)
            goto normal_char;
          if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS
              || lasttok == END || lasttok == LPAREN || lasttok == OR)
            return lasttok = BEGLINE;
          goto normal_char;
]], [[
        case '^':
          if (backslash)
            goto normal_char;
          if (syntax_bits & RE_CONTEXT_INDEP_ANCHORS
              || lexer->lasttok == END || lexer->lasttok == LPAREN || lexer->lasttok == OR)
            return lasttok = BEGLINE;
          goto normal_char;
]])

-- NOTE: Tabs are expanded to spaces here; this may be unacceptable to some
-- parties and may need to be changed.
LexBody = TextSubst(LexBody, [[
#define PUSH_LEX_STATE(s)			\
  do						\
    {						\
      char const *lexptr_saved = lexptr;	\
      size_t lexleft_saved = lexleft;		\
      lexptr = (s);				\
      lexleft = strlen (lexptr)

#define POP_LEX_STATE()				\
      lexptr = lexptr_saved;			\
      lexleft = lexleft_saved;			\
    }						\
  while (0)
]], [[
#define PUSH_LEX_STATE(s)                       \
  do                                            \
    {                                           \
      char const *lexptr_saved = lexer->lexptr; \
      size_t lexleft_saved = lexer->lexleft;    \
      lexer->lexptr = (s);                      \
      lexer->lexleft = strlen (lexer->lexptr)

#define POP_LEX_STATE()                         \
      lexer->lexptr = lexptr_saved;             \
      lexer->lexleft = lexleft_saved;           \
    }                                           \
  while (0)
]])

-- There are a few remaining "return lasttok = ..." cases in the code
LexBody = TextSubst(LexBody,
                    " return lasttok = ", " return lexer->lasttok = ")
LexBody = TextSubst(LexBody,[[
          lasttok = parse_bracket_exp ();

          POP_LEX_STATE ();

          laststart = false;
          return lasttok;
]], [[
          lexer->lasttok = parse_bracket_exp ();

          POP_LEX_STATE ();

          laststart = false;
          return lexer->lasttok;
]])

-- There are a few remaining " laststart" cases in the code
LexBody = TextSubst(LexBody,
                    " laststart", " lexer->laststart")

-- Deal with "parens" variable, moved into lexer state
LexBody = TextSubst(LexBody, " ++parens;", " ++lexer->parens;")
LexBody = TextSubst(LexBody, " --parens;", " --lexer->parens;")
LexBody = TextSubst(LexBody, [[
          if (parens == 0 && syntax_bits & RE_UNMATCHED_RIGHT_PAREN_ORD)
]], [[
          if (lexer->parens == 0 && syntax_bits & RE_UNMATCHED_RIGHT_PAREN_ORD)
]])

LexBody = LexBody:gsub(" FETCH_WC %(", " FETCH_WC (lexer, ")
LexBody = LexBody:gsub(" parse_bracket_exp %(%)", " parse_bracket_exp (lexer)")
-- Add fsatoken prefixes after major edits have been done.
LexBody = FSATokenSubst(LexBody)

-- Rewrite references to variables moved into lexer.
-- Try pattern search/replace for these variables; this may cause collateral
-- damage, but the alternative is tedious.
for _, Keyword in ipairs{"case_fold", "syntax_bits", "eolbyte", "dotclass"} do
   LexBody = LexBody:gsub("([^%w_])" .. Keyword .. "([^%w_])",
                           "%1lexer->" .. Keyword .. "%2")
end
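-- Note (sketch only): the capture-based pattern above consumes the
-- delimiter characters around each keyword, so an occurrence at the very
-- start or end of the body, or two occurrences separated by a single
-- character, can escape replacement.  Lua's frontier pattern %f gives a
-- zero-width word boundary instead; WholeWordSubst below is a hypothetical
-- alternative, not what the script currently uses:

```lua
-- Whole-word replacement via the %f frontier pattern, which matches the
-- transition into/out of the [%w_] character set without consuming any
-- delimiter characters.
local function WholeWordSubst(Body, Keyword, Replacement)
  return (Body:gsub("%f[%w_]" .. Keyword .. "%f[^%w_]", Replacement))
end
```

-- Longer identifiers that merely contain the keyword (e.g. "case_folding"
-- containing "case_fold") are still left untouched.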
assert(f:write(LexBody, "\n"))

assert(f:write(Decls["lex-pattern"]))
assert(f:write([[
{
  /* Copy parameters to internal state variables.  */
  lexer->lexptr = pattern;
  lexer->lexleft = pattern_len;

  /* Reset lexical scanner state.  */
  lexer->lasttok = FSATOKEN_TK_END;
  lexer->laststart = 1;
  lexer->parens = 0;

  /* Reset multibyte parsing state. */
  lexer->cur_mb_len = 1;
  memset(&lexer->mbrtowc_state, 0, sizeof (lexer->mbrtowc_state));
}
]]))

assert(f:write(Decls["lex-syntax"]))
assert(f:write([[
{
  charclass_t *work_class;
  predicate_entry_t *pred;

  /* Set a flag noting that this lexer has had its syntax params set.  */
  lexer->syntax_initialised = true;

  /* Record the function parameters in our local context.  */
  lexer->syntax_bits = bits;
  lexer->case_fold = fold;
  lexer->eolbyte = eol;

  /* Set up unibyte/multibyte flags, based on MB_CUR_MAX, which depends on
     the current locale.  We capture this information here as the locale
     may change later.  At present, we don't capture MB_CUR_MAX itself.  */
  if (MB_CUR_MAX > 1)
    {
      /* Multibyte locale: Prepare booleans to make code easier to read */
      lexer->unibyte_locale = false;
      lexer->multibyte_locale = true;

      /* Set up an array of structures to hold multibyte character sets.  */
      lexer->nmbcsets = 0;
      lexer->mbcsets_alloc = 2;
      lexer->mbcsets = xzalloc (sizeof (*lexer->mbcsets)
                                      * lexer->mbcsets_alloc);
    }
  else
    {
      /* Unibyte locale: Prepare booleans to make code easier to read */
      lexer->unibyte_locale = true;
      lexer->multibyte_locale = false;
    }

  /* Charclass guarantees that class index 0 is zeroclass, so we don't need
     to set it up here.  */

  /* Set up a character class to match anychar ('.'), tailored to
     accommodate options from the regex syntax.  */
  work_class = charclass_alloc ();
  charclass_notset (work_class);
  if (! (lexer->syntax_bits & RE_DOT_NEWLINE))
    {
      charclass_clrbit (lexer->eolbyte, work_class);
    }
  if (lexer->syntax_bits & RE_DOT_NOT_NULL)
    {
      charclass_clrbit (0, work_class);
    }
  lexer->dotclass_index = charclass_finalise (work_class);
  lexer->dotclass = charclass_get_pointer (lexer->dotclass_index);

  /* Testing for the absence of RE_NO_GNU_OPS in syntax_bits happens often,
     so set a direct flag variable:  This makes code more readable.  */
  lexer->re_gnu_ops = ! (lexer->syntax_bits & RE_NO_GNU_OPS);

  /* Initialise cache and other tables that have syntax and/or locale
     influences.  */

  /* Set up the wchar_desc fields of the predicate table.  */
  for (pred = lexer->predicates; pred->name != NULL; pred++)
    pred->wchar_desc = wctype (pred->name);

  /* Add special treatment for class "digit", as it is *always* a single
     octet?  This was done in the past by the "single_byte_only" field in
     the predicate list, and we could bring that treatment back in here if
     we wished, with the following code:  */
#if 0
  /* Search for "digit" predicate, initialise it by hand, and, by setting
     its wchar_desc field to 0, mark it as an always-unibyte class.  */
  for (pred = lexer->predicates; pred->name != NULL; pred++)
    if (STREQ(pred->name, "digit"))
      {
        int i;
        charclass_t *isdigit_work_class;
        charclass_index_t work_index;

        isdigit_work_class = charclass_alloc ();
        for (i = 0; i < FSATOKEN_NOTCHAR; i++)
          if (isdigit (i))
            charclass_setbit (i, isdigit_work_class);
        work_index = charclass_finalise (isdigit_work_class);
        pred->class = charclass_get_pointer (work_index);
        pred->wchar_desc = 0;
        break;
      }
#endif /* 0 */

  /* Initialise first-octet cache so multibyte code dealing with
     single-octet codes can avoid the slow function mbrtowc.  */
  mb_uchar_cache (lexer);
}

]]))

assert(f:write(Decls["lex-exception-fns"]))
assert(f:write([[
{
  /* Record the provided functions in the lexer's context.  */
  lexer->warn_client        = warningfn;
  lexer->abandon_with_error = errorfn;
}

]]))

-- 12 Apr 2014: Support the newly-introduced "exchange" interface
assert(f:write(Decls["lex-exchange"]))
assert(f:write([[
{
  switch (opcode)
    {
    case PROTO_LEXPARSE_OP_GET_IS_MULTIBYTE_ENV:
      return (int) lexer->multibyte_locale;
    case PROTO_LEXPARSE_OP_GET_REPMN_MIN:
      return lexer->minrep;
    case PROTO_LEXPARSE_OP_GET_REPMN_MAX:
      return lexer->maxrep;
    case PROTO_LEXPARSE_OP_GET_WIDE_CHAR:
      *((wchar_t *) param) = lexer->wctok;
      break;
    case PROTO_LEXPARSE_OP_GET_DOTCLASS:
      *((charclass_t **) param) = lexer->dotclass;
      break;
    default:
      /* ?? Not sure if we should complain/assert or merely ignore an opcode
         that we don't recognise here.  */
      break;
    }

    /* If we reach here, return value is unimportant, so just say 0.  */
    return 0;
}

]]))

--[===[
assert(f:write(Decls["lex-fetch-repmn-params"]))
assert(f:write([[
{
  /* Merely copy internal static variables to the caller.  */
  *p_minrep = lexer->minrep;
  *p_maxrep = lexer->maxrep;
}

]]))

assert(f:write(Decls["lex-fetch-wctok"]))
assert(f:write([[
{
  return lexer->wctok;
}

]]))

assert(f:write(Decls["lex-fetch-dotclass"]))
assert(f:write([[
{
  return lexer->dotclass;
}

]]))

--]===]

assert(f:write([[
/* Add "not provided!" stub function that gets called if the client
   fails to provide proper resources.  This is a hack, merely to get the
   module started; better treatment needs to be added later.  */
static void
no_function_provided(void *unused)
{
 assert (!"fsalex: Plug-in function required, but not provided.");
}

]]))

-- Define a "new" function to generate an initial parser context.
assert(f:write(Decls["lex-new"]))
assert(f:write([[
{
  fsalex_ctxt_t *new_context;

  /* Acquire zeroed memory for new lexer context.  */
  new_context = XZALLOC (fsalex_ctxt_t);

  /* ?? Point warning and error functions to a "you need to tell me
     these first!" function? */
  new_context->warn_client        = (fsalex_warn_callback_fn *)
                                    no_function_provided;
  new_context->abandon_with_error = (fsalex_error_callback_fn *)
                                    no_function_provided;

  /* Default to working in a non-multibyte locale.  In some cases, FETCH_WC
     never sets this variable (as it's assumed to be 1), so fulfil this
     expectation here.  */
  new_context->cur_mb_len = 1;

  /* Copy the template predicate list into this context, so that we can
     have lexer-specific named predicate classes.  */
  memcpy (new_context->predicates, template_predicate_list,
         sizeof (new_context->predicates));

  /* Default to unibyte locale at first; the final locale setting is made
     according to what's in force when fsalex_syntax () is called.  */
  new_context->unibyte_locale = true;
  new_context->multibyte_locale = false;

  /* Many things depend on decisions made in fsalex_syntax (), so note here
     that it hasn't been called yet, and fail gracefully later if the client
     hasn't called the function before commencing work.  */
  new_context->syntax_initialised = false;

  return new_context;
}
]]))


-- Finally, add trailer line (vim)
assert(f:write([[
/* vim:set shiftwidth=2: */
]]))

assert(f:close())

------------------------------------------------------------------------------

----------------******** fsaparse.h ********----------------

print("Creating fsaparse.h...")
local f = assert(io.open("fsaparse.h", "w"))
assert(f:write([[
/* fsaparse -- Build a structure naming relationships (sequences,
               alternatives, options and precedence) of tokens

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* This function receives a stream of tokens from fsalex, and processes
   them to impose precedence rules and to describe complex pattern elements
   that are beyond the capability of the simple lexer.  In addition to the
   cases explicit in the syntax (e.g. "(ab|c)"), variable-length multibyte
   encodings (UTF-8; codesets including modifiers and/or shift items) also
   require these enhanced facilities.  */


]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[
#ifndef FSAPARSE_H
#define FSAPARSE_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include "fsatoken.h"
#include "proto-lexparse.h"

/* Multiple parser instances can exist in parallel, so define an opaque
   type to collect together all the context relating to each instance.  */
typedef struct fsaparse_ctxt_struct fsaparse_ctxt_t;

/* Allow configurable parser/lexer combinations by using a plugin interface
   for lexer invocation.  */
typedef fsatoken_token_t
fsaparse_lexer_fn_t (void *lexer_context);

]]))

WriteExternDecl(f, Decls["parse-new"])
WriteExternDecl(f, Decls["parse-lexer"])

-- FIXME: The following is a copy-paste-and-rename-edit from the fsalex code.
-- Is there a less redundant way of doing this?

-- While dfa.h declares dfawarn() and dfaerror(), and demands that the client
-- supply functions at link time, we instead provide an interface function
-- so that the functions can be handed over explicitly.  This style may be
-- useful in the future if we want to move from a single lexer instance to
-- multiple instances (objects?)
assert(f:write([[
/* Define function prototypes for warning and error callbacks.  */
typedef void
fsaparse_warn_callback_fn (const char *);
typedef void /* ?? _Noreturn? */
fsaparse_error_callback_fn (const char *);

/* Receive functions to deal with exceptions detected by the parser:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
extern void
fsaparse_exception_fns (fsaparse_ctxt_t *parser,
                        fsaparse_warn_callback_fn *warningfn,
                        fsaparse_error_callback_fn *errorfn);

]]))

WriteExternDecl(f, Decls["parse"])

WriteExternDecl(f, Decls["parse-get-token-list"])

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[
#endif /* FSAPARSE_H */

/* vim:set shiftwidth=2: */
]]))

assert(f:close())

----------------******** fsaparse.c ********----------------

print("Creating fsaparse.c...")
local f = assert(io.open("fsaparse.c", "w"))
assert(f:write([[
/* fsaparse -- Build a structure naming relationships (sequences, alternatives,
               backreferences, options and precedence) of tokens

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* This function receives a stream of tokens from fsalex, and processes
   them to impose precedence rules and to describe complex pattern elements
   that are beyond the capability of the simple lexer.  In addition to the
   cases explicit in the syntax (e.g. "(ab|c)"), variable-length multibyte
   encodings (UTF-8; codesets including modifiers and/or shift items) also
   require these enhanced facilities.  */

]]))

assert(f:write([[
/* Always import environment-specific configuration items first.  */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include "fsaparse.h"
#include "fsalex.h"
#include "fsatoken.h"
#include "proto-lexparse.h"
#include <stdbool.h>
#include "xalloc.h"

/* gettext.h ensures that we don't use gettext if ENABLE_NLS is not defined */
#include "gettext.h"
#define _(str) gettext (str)

#include <wchar.h>
#include <wctype.h>

]]))

-- General helper macros
assert(f:write(EditedText("REALLOC_IF_NECESSARY",
               " while %(false%)", " while (0)"), "\n"))

assert(f:write([[
#if HAVE_LANGINFO_CODESET
# include <langinfo.h>
#endif

/* ?? Sigh... wanted to keep multibyte code in fsaPARSE(!) to a minimum, but
   a LOT of code breaks if struct mb_char_classes isn't defined.  */

]]))
assert(f:write(RawText("mb_char_classes_struct"), "\n"))

assert(f:write([[
/* fsaparse_ctxt: Gather all the context to do with the parser into a single
   struct.  We do this mainly because it makes it easier to contemplate
   having multiple instances of this module running in parallel, but also
   because it makes translating from "dfa->" easier.  This definition
   fleshes out the opaque type given in the module header.  */
struct fsaparse_ctxt_struct
{
  /* Warning and abort functions provided by client.  */
  fsalex_warn_callback_fn *warn_client;
  fsalex_error_callback_fn *abandon_with_error;

  /* Plug-in functions and context to deal with lexer at arm's length.  */
  proto_lexparse_lex_fn_t *lexer;
  proto_lexparse_exchange_fn_t *lex_exchange;
  void *lex_context;

  /* Information about locale (needs to sync with lexer...?) */
  bool multibyte_locale;
  bool unibyte_locale;

  fsatoken_token_t lookahead_token;

  size_t current_depth;      /* Current depth of a hypothetical stack
                                holding deferred productions.  This is
                                used to determine the depth that will be
                                required of the real stack later on in
                                dfaanalyze.  */
]]))

-- Ugh... chuck in a slab of the dfa struct here, sigh.
assert(f:write(EditedText("dfa-struct parser",
          "  token %*tokens", "  fsatoken_token_t *tokens",
          "  token utf8_anychar_", "  fsatoken_token_t utf8_anychar_")))
assert(f:write(RawText("dfa-struct parser-multibyte")))
assert(f:write(RawText("dfa-struct bracket-expressions-array")))
assert(f:write([[
};

]]))

assert(f:write(EditedText("using_utf8",
                         "\nint\nusing_utf8 ",
                            "\nstatic int\nusing_utf8 "), "\n"))

assert(f:write(RawText("recursive-descent parser intro"), "\n"))

--[[ -- These are now in the fsaparse_ctxt_t struct.
assert(f:write(EditedText("lookahead token",
                        token tok;", "static fsatoken_token_t tok;")))
assert(f:write(RawText("deferred-prod-stack-depth"), "\n"))
--]]

local Addtok_mbBody = EditedText("addtok_mb",
                          "addtok_mb %(token t,",
           "addtok_mb (fsaparse_ctxt_t *parser, fsatoken_token_t t,",
           "dfa%->", "parser->",
           "MB_CUR_MAX > 1", "parser->multibyte_locale",
           "%-%-depth", "--parser->current_depth",
           "%+%+depth", "++parser->current_depth")
Addtok_mbBody = TextSubst(Addtok_mbBody, [[
  if (depth > parser->depth)
    parser->depth = depth;
]], [[
  if (parser->depth < parser->current_depth)
    parser->depth = parser->current_depth;
]])
assert(f:write(FSATokenSubst(Addtok_mbBody), "\n"))
assert(f:write(EditedText("addtok_wc fwd decl",
         "addtok_wc %(", "addtok_wc (fsaparse_ctxt_t *parser, "),
               "\n"))
local AddtokBody = EditedText("addtok",
           "\naddtok %(token t%)",
           "\naddtok (fsaparse_ctxt_t *parser, fsatoken_token_t t)",
           "dfa%->", "parser->",
           "MB_CUR_MAX > 1", "parser->multibyte_locale",
           "  addtok_wc %(", "  addtok_wc (parser, ",
           "  addtok_mb %(", "  addtok_mb (parser, ",
           "  addtok %(",    "  addtok (parser, ")
assert(f:write(FSATokenSubst(AddtokBody)))
--assert(f:write(FSATokenSubst(RawText("MBS_SUPPORT::addtok_wc start"))))

-- CHANGE, possibly buggy: cur_mb_len appears in clearly-connected places
-- in the lexer (FETCH_WC and parse_bracket_exp); however, it also appears
-- in addtok_wc, in a way that seems to be local-only.
-- I'm choosing to add a local addtok_mb_len variable in addtok_wc, as my
-- belief (after reviewing the code) is that the two uses of the same
-- variable are not related.  Without this addition, the variable is flagged
-- as undeclared, as the other definition is in the lexer module.
local Addtok_wcMBSBody = EditedText("addtok_wc",
           "\naddtok_wc %(", "\naddtok_wc (fsaparse_ctxt_t *parser, ",
           "  addtok_mb %(", "  addtok_mb (parser, ",
           "  addtok %(",    "  addtok (parser, ")
Addtok_wcMBSBody = TextSubst(Addtok_wcMBSBody, [[
  unsigned char buf[MB_LEN_MAX];
  mbstate_t s = { 0 };
  int i;
  size_t stored_bytes = wcrtomb ((char *) buf, wc, &s);

  if (stored_bytes != (size_t) -1)
]], [[
  unsigned char buf[MB_LEN_MAX];
  mbstate_t s = { 0 };
  int i;
  int cur_mb_len;
  size_t stored_bytes = wcrtomb ((char *) buf, wc, &s);

  if (stored_bytes != (size_t) -1)
]])
assert(f:write(FSATokenSubst(Addtok_wcMBSBody)))
--[==[
assert(f:write(FSATokenSubst(RawText("MBS_SUPPORT::addtok_wc else"))))
assert(f:write([[
static void
addtok_wc (fsaparse_ctxt_t *parser, wint_t wc)
{
}
]]))
assert(f:write(FSATokenSubst(RawText("MBS_SUPPORT::addtok_wc end")), "\n"))
--]==]

-- Use beefed-up charclass (setbit_range) and dotclass (from lexer) to
-- construct charclasses for UTF-8 sequences without knowing charclass
-- internals and also without knowing details of regex syntax/configuration
-- regarding end-of-line.  We also depend on charclass's permission to use 0
-- as a "not-initialised" sentinel here.
assert(f:write(EditedText("add_utf8_anychar-decl",
           "\nadd_utf8_anychar %(void%)",
           "\nadd_utf8_anychar (fsaparse_ctxt_t *parser)")))
assert(f:write(RawText("add_utf8_anychar-body-start")))
assert(f:write([[
  unsigned int i;

  /* Have we set up the classes for the 1-byte to 4-byte sequence types?  */
  if (parser->utf8_anychar_classes[0] == 0)
    {
      /* No, first time we've been called, so set them up now.  */
      charclass_t *ccl;
      const charclass_t *dotclass;

      /* Index 0: 80-bf -- Non-leading bytes.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0x80, 0xbf, ccl);
      parser->utf8_anychar_classes[0] = charclass_finalise (ccl);

      /* Index 1: 00-7f -- 1-byte leading seq, minus dotclass exceptions.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0x00, 0x7f, ccl);
      fsalex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_DOTCLASS,
                       &dotclass);
      charclass_intersectset (dotclass, ccl);
      parser->utf8_anychar_classes[1] = charclass_finalise (ccl);

      /* Index 2: c2-df -- 2-byte sequence.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0xc2, 0xdf, ccl);
      parser->utf8_anychar_classes[2] = charclass_finalise (ccl);

      /* Index 3: e0-ef -- 3-byte sequence.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0xe0, 0xef, ccl);
      parser->utf8_anychar_classes[3] = charclass_finalise (ccl);

      /* Index 4: f0-f7 -- 4-byte sequence.  */
      ccl = charclass_alloc ();
      charclass_setbit_range (0xf0, 0xf7, ccl);
      parser->utf8_anychar_classes[4] = charclass_finalise (ccl);
    }

]]))
assert(f:write(RawText("add_utf8_anychar-description")))
assert(f:write([[
  /* Write out leaf tokens for each of the four possible starting bytes.  */
  for (i = 1; i < 5; i++)
    addtok (parser, FSATOKEN_TK_CSET + parser->utf8_anychar_classes[i]);
  /* Add follow-on classes, plus tokens to build a postfix tree covering all
     four alternatives of valid UTF-8 sequences.  */
  for (i = 1; i <= 3; i++)
    {
      addtok (parser, FSATOKEN_TK_CSET + parser->utf8_anychar_classes[0]);
      addtok (parser, FSATOKEN_TK_CAT);
      addtok (parser, FSATOKEN_TK_OR);
    }
]]))
assert(f:write(RawText("add_utf8_anychar-body-end"), "\n"))

assert(f:write(FSATokenSubst(RawText("Grammar summary comment")), "\n"))

assert(f:write([[
/* Provide a forward declaration for regexp, as it is at the top of the
   parse tree, but is referenced by atom, at the bottom of the tree.  */
static void regexp (fsaparse_ctxt_t *parser);

]]))
--[[]]

local AtomFn = EditedText("atom",
           "  addtok %(",    "  addtok (parser, ",
           "  addtok_wc %(", "  addtok_wc (parser, ",
           "  add_utf8_anychar %(%)",   " add_utf8_anychar (parser)")
AtomFn = TextSubst(AtomFn, [[
static void
atom (void)
{
  if (tok == WCHAR)
    {
      addtok_wc (parser, wctok);

      if (case_fold)
        {
          wchar_t folded[CASE_FOLDED_BUFSIZE];
          int i, n = case_folded_counterparts (wctok, folded);
          for (i = 0; i < n; i++)
            {
              addtok_wc (parser, folded[i]);
              addtok (parser, OR);
            }
        }
]], [[
static void
atom (fsaparse_ctxt_t *parser)
{
  fsatoken_token_t tok = parser->lookahead_token;

  if (tok == WCHAR)
    {
      wchar_t wctok;
      int i, n;
      wchar_t folded[FSALEX_CASE_FOLDED_BUFSIZE];

      fsalex_exchange (parser->lex_context, PROTO_LEXPARSE_OP_GET_WIDE_CHAR,
                       &wctok);
      addtok_wc (parser, wctok);

      n = fsalex_case_folded_counterparts (parser->lex_context,
                                           wctok, folded);
      for (i = 0; i < n; i++)
        {
          addtok_wc (parser, folded[i]);
          addtok (parser, FSATOKEN_TK_OR);
        }
]])
AtomFn = TextSubst(AtomFn, [[
  else if (tok == LPAREN)
    {
      tok = lex ();
      regexp ();
      if (tok != RPAREN)
]], [[
  else if (tok == LPAREN)
    {
      parser->lookahead_token = parser->lexer (parser->lex_context);
      regexp (parser);
      tok = parser->lookahead_token;
      if (tok != RPAREN)
]])
AtomFn = TextSubst(AtomFn,
         "  tok = lex ()",
         "  parser->lookahead_token = parser->lexer (parser->lex_context)")
AtomFn = TextSubst(AtomFn,
         "  dfaerror (_(",
         "  parser->abandon_with_error (_(")
assert(f:write(FSATokenSubst(AtomFn), "\n"))
assert(f:write(FSATokenSubst(EditedText("nsubtoks",
           "\nnsubtoks %(",  "\nnsubtoks (fsaparse_ctxt_t *parser, ",
           " nsubtoks %(",   " nsubtoks (parser, ",
           "dfa%->",         "parser->")),
               "\n"))
assert(f:write(FSATokenSubst(EditedText("copytoks",
           "\ncopytoks %(",  "\ncopytoks (fsaparse_ctxt_t *parser, ",
           " addtok_mb %(",  " addtok_mb (parser, ",
           "dfa%->",         "parser->",
           "MB_CUR_MAX > 1", "parser->multibyte_locale")),
               "\n"))

assert(f:write([[
/* Rewriting fsaparse:closure () from scratch; original is clever but a
   little tricky to follow, so I'm trying to break up a while + compound-if
   loop into a simpler construct (more like a finite-state machine).  Also,
   edits such as replacing "dfa->" with "parser->" are done here, adding
   "parser" as a parameter in lots of places, as well as the long-winded
   "FSATOKEN_TK_" prefix.

   I'm not sure if this version is an improvement over the original; the
   need to use "parser->lookahead_token" instead of "tok" influenced my
   decision to try this... but the jury is still out.  */
static void
closure (fsaparse_ctxt_t *parser)
{
restart_closure:
  atom (parser);
  for (;;)
    {
      switch (parser->lookahead_token)
        {
          case FSATOKEN_TK_QMARK:
          case FSATOKEN_TK_STAR:
          case FSATOKEN_TK_PLUS:
            addtok (parser, parser->lookahead_token);
            parser->lookahead_token = parser->lexer (parser->lex_context);
            continue;

          case FSATOKEN_TK_REPMN:
            /* REPMN needs extra work; move outside the switch statement.  */
            break;

          default:
            /* Merely let the initial atom call stand as our return result.  */
            return;
        }

      /* Deal with REPMN{min, max} cases in a separate block.  */
      {
        int i;
        size_t prev_sub_index, ntokens;
        int minrep, maxrep;

        /* Get the {min, max} pair decoded by the lexer.  */
        minrep = parser->lex_exchange (parser->lex_context,
                                       PROTO_LEXPARSE_OP_GET_REPMN_MIN,
                                       NULL);
        maxrep = parser->lex_exchange (parser->lex_context,
                                       PROTO_LEXPARSE_OP_GET_REPMN_MAX,
                                       NULL);

        /* Find out how many tokens are in the preceding token list that are
           covered by this REPMN directive.  This involves carefully working
           backwards through the linear, postfix token ordering.  */
        ntokens = nsubtoks (parser, parser->tindex);

        /* If min and max are both zero, merely remove preceding
           subexpression, get a new token, and restart the atom/closure
           processing from the top of the function.  Not sure if people will
           like this goto statement, but we'll give it a whirl.   */
        if (minrep == 0 && maxrep == 0)
          {
            parser->tindex -= ntokens;
            parser->lookahead_token = parser->lexer (parser->lex_context);
            goto restart_closure;
          }

        /* Non-zero min or max, defined as follows:
             {n}   The preceding item is matched exactly n times.
             {n,}  The preceding item is matched n or more times.
             {,m}  The preceding item is matched at most m times (GNU ext.)
             {n,m} The preceding item is matched at least n, but not more
                   than m times.
           For {n,} and {,m} cases, the omitted parameter is reported here
           as a negative value.  */
        prev_sub_index = parser->tindex - ntokens;
        if (maxrep < 0)
          addtok (parser, FSATOKEN_TK_PLUS);
        if (minrep == 0)
          addtok (parser, FSATOKEN_TK_QMARK);
        for (i = 1; i < minrep; ++i)
          {
            copytoks (parser, prev_sub_index, ntokens);
            addtok (parser, FSATOKEN_TK_CAT);
          }
        for (; i < maxrep; ++i)
          {
            copytoks (parser, prev_sub_index, ntokens);
            addtok (parser, FSATOKEN_TK_QMARK);
            addtok (parser, FSATOKEN_TK_CAT);
          }
        /* Prime the parser with the next token after REPMN and loop.  */
        parser->lookahead_token = parser->lexer (parser->lex_context);
      }
    }
}

]]))

--[==[
local ClosureBody = EditedText("closure",
           "tok = lex %(%)",
     "parser->lookahead_token = parser->lexer (parser->lex_context)",
           "dfa%->",         "parser->",
           " nsubtoks %(",  " nsubtoks (parser, ",
           " copytoks %(",  " copytoks (parser, ",
           " addtok %(",    " addtok (parser, ",
           " atom %(%)",    " atom (parser)",
           " closure %(%)", " closure (parser)")

ClosureBody = TextSubst(ClosureBody, [[
static void
closure (void)
{
  int i;
  size_t tindex, ntokens;

  atom (parser);
  while (tok == QMARK || tok == STAR || tok == PLUS || tok == REPMN)
    if (tok == REPMN && (minrep || maxrep))
]], [[
static void
closure (fsaparse_ctxt_t *parser)
{
  int i;
  size_t tindex, ntokens;
  int minrep, maxrep;
  fsatoken_token_t tok;

  atom (parser);
  tok  = parser->lookahead_token;
  if (tok == REPMN)
    {
      minrep = parser->lex_exchange(parser->lex_context,
                                    PROTO_LEXPARSE_OP_GET_REPMN_MIN,
                                    NULL);
      maxrep = parser->lex_exchange(parser->lex_context,
                                    PROTO_LEXPARSE_OP_GET_REPMN_MAX,
                                    NULL);
    }
  while (tok == QMARK || tok == STAR || tok == PLUS || tok == REPMN)
    if (tok == REPMN && (minrep || maxrep))
]])

ClosureBody = TextSubst(ClosureBody, [[
        parser->lookahead_token = parser->lexer (parser->lex_context);
]], [[
        parser->lookahead_token = parser->lexer (parser->lex_context);
        tok = parser->lookahead_token;
]])

assert(f:write(FSATokenSubst(ClosureBody), "\n"))

--]==]

local BranchBody = EditedText("branch",
           "\nbranch %(void%)", "\nbranch (fsaparse_ctxt_t *parser)",
           " addtok %(",     " addtok (parser, ",
           " closure %(%)",  " closure (parser)")
BranchBody = TextSubst(BranchBody, [[
{
  closure (parser);
  while (tok != RPAREN && tok != OR && tok >= 0)
    {
      closure (parser);
      addtok (parser, CAT);
    }
]], [[
{
  fsatoken_token_t tok;

  closure (parser);
  tok = parser->lookahead_token;
  while (tok != RPAREN && tok != OR && tok >= 0)
    {
      closure (parser);
      tok = parser->lookahead_token;
      addtok (parser, CAT);
    }
]])
assert(f:write(FSATokenSubst(BranchBody), "\n"))

local RegexpBody = EditedText("regexp",
           "\nregexp %(void%)", "\nregexp (fsaparse_ctxt_t *parser)",
           "dfa%->",         "parser->",
           " addtok %(",    " addtok (parser, ",
           " atom %(%)",    " atom (parser)",
           " branch %(%)",  " branch (parser)",
           " closure %(%)", " closure (parser)")
RegexpBody = TextSubst(RegexpBody,
         "  tok = lex ()",
         "  parser->lookahead_token = parser->lexer (parser->lex_context)")
RegexpBody = TextSubst(RegexpBody, [[
  while (tok == OR)
    {
]], [[
  while (parser->lookahead_token == OR)
    {
]])

assert(f:write(FSATokenSubst(RegexpBody), "\n"))

assert(f:write(Decls["parse"]))

-- Rewrite body of fsaparse_parse (dfaparse) without substitutions, since
-- much of the initialisation code here has been made redundant: the client
-- can now instantiate and configure the lexer independently.
assert(f:write([[
{
  /* Obtain an initial token for lookahead, and keep tracking tree depth.  */
  parser->lookahead_token = parser->lexer (parser->lex_context);
  parser->current_depth = parser->depth;

  /* Run regexp to manage the next level of parsing.  */
  regexp (parser);
  if (parser->lookahead_token != FSATOKEN_TK_END)
    parser->abandon_with_error (_("unbalanced )"));

  /* If multiple expressions are parsed, second and subsequent patterns are
     presented as alternatives to preceding patterns.  */
  addtok (parser, FSATOKEN_TK_END - parser->nregexps);
  addtok (parser, FSATOKEN_TK_CAT);
  if (parser->nregexps)
    addtok (parser, FSATOKEN_TK_OR);

  ++parser->nregexps;
}

]]))

--[==[
local Parse = EditedText("dfaparse-body",
           " = lex %(%)",     " = fsalex_lex (NULL)",
           "dfa%->",          "parser->",
           " addtok %(",     " addtok (parser, ",
           " atom %(%)",     " atom (parser)",
           " branch %(%)",   " branch (parser)",
           " closure %(%)",  " closure (parser)",
           " regexp %(%)",   " regexp (parser)")

Parse = TextSubst(Parse, [[
  lexptr = s;
  lexleft = len;
  lasttok = END;
  laststart = 1;
  parens = 0;
]], [[
  fsalex_pattern (s, len);
]])
assert(f:write(FSATokenSubst(Parse), "\n"))
--]==]

assert(f:write([[
/* Receive functions to deal with exceptions detected by the parser:
   Warnings and errors.  Internally, we add the _Noreturn attribute
   to the error callback, to help the compiler with code flow
   analysis.  */
extern void
fsaparse_exception_fns (fsaparse_ctxt_t *parser,
                        fsaparse_warn_callback_fn *warningfn,
                        fsaparse_error_callback_fn *errorfn)
{
  /* Exception handling is done by explicit callbacks.  */
  parser->warn_client = warningfn;
  parser->abandon_with_error = errorfn;
}

/* Add "not provided!" stub function that gets called if the client
   fails to provide proper resources.  This is a hack, merely to get the
   module started; better treatment needs to be added later.  */
static void
no_function_provided (void *unused)
{
  assert (!"fsaparse: Plug-in function required, but not provided.");
}

]]))

assert(f:write(Decls["parse-lexer"]))
assert(f:write([[
{
  bool is_multibyte;

  /* Record supplied lexer function and context for use later.  */
  parser->lex_context  = lexer_context;
  parser->lexer        = lex_fn;
  parser->lex_exchange = lex_exchange_fn;

  /* Query lexer to get multibyte nature of this locale.  */
  is_multibyte = lex_exchange_fn (lexer_context,
                                  PROTO_LEXPARSE_OP_GET_IS_MULTIBYTE_ENV,
                                  NULL);
  parser->multibyte_locale = is_multibyte;
  parser->unibyte_locale = ! is_multibyte;
}
]]))


-- Define a "new" function to generate an initial parser context.
assert(f:write(Decls["parse-new"]))
assert(f:write([[
{
  fsaparse_ctxt_t *new_context;

  /* Acquire zeroed memory for new parser context.  */
  new_context = XZALLOC (fsaparse_ctxt_t);

  /* ?? Point warning, error and lexer functions to a "you need to tell me
     these first!" function? */
  new_context->warn_client        = (fsaparse_warn_callback_fn *)
                                    no_function_provided;
  new_context->abandon_with_error = (fsaparse_error_callback_fn *)
                                    no_function_provided;
  new_context->lexer              = (fsaparse_lexer_fn_t  *)
                                    no_function_provided;

  /* Default to unibyte locale... but we should synchronise with lexer. */
  new_context->multibyte_locale = false;
  new_context->unibyte_locale = true;

  return new_context;
}
]]))

-- fsamusts, and also debugging clients, use the final token list generated
-- by the parser, so provide an interface for them to access the list.
assert(f:write(Decls["parse-get-token-list"]))
assert(f:write([[
{
  *nr_tokens  = parser->tindex;
  *token_list = parser->tokens;
}
]]))


-- Finally, add trailer lines (vim)
assert(f:write([[
/* vim:set shiftwidth=2: */
]]))

assert(f:close())

------------------------------------------------------------------------------

----------------******** fsamusts.h ********----------------

print("Creating fsamusts.h...")
local f = assert(io.open("fsamusts.h", "w"))
assert(f:write([[
/* fsamusts -- Report a list of must-have simple strings in the pattern

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* ?? Insert long description/discussion here.  */

]]))

-- Add preprocessor lines to make this header file idempotent.
assert(f:write([[
#ifndef FSAMUSTS_H
#define FSAMUSTS_H 1

/* Always import environment-specific configuration items first.  */
#include <config.h>

#include "fsatoken.h"

]]))

-- Rather than accumulating "must" strings inside the internals, and
-- obtaining a pointer to the list at the end, we explicitly name the
-- list in the "must" (note: *not* "musts") call, and receive an
-- updated pointer as the return value.  Because the structure is
-- involved more heavily in the interface, change it to a typedef.
assert(f:write(RawText("dfamust-struct description")))
assert(f:write([[
typedef struct fsamusts_list_element
{
  int exact;
  char *must;
  struct fsamusts_list_element *next;
} fsamusts_list_element_t;

]]))

WriteExternDecl(f, Decls["must"])

-- Finally, add trailer lines (idempotency, vim)
assert(f:write([[
#endif /* FSAMUSTS_H */

/* vim:set shiftwidth=2: */
]]))

assert(f:close())

----------------******** fsamusts.c ********----------------

print("Creating fsamusts.c...")
local f = assert(io.open("fsamusts.c", "w"))
assert(f:write([[
/* fsamusts -- Report a list of must-have simple strings in the pattern

]]))
assert(f:write(RawText("Copyright.dfac"), "\n"))
assert(f:write(RawText("LicenseWarranty.dfac"), "\n"))
assert(f:write(RawText("Authors.dfac")))
assert(f:write([[

/* 2014: Repackaged by "untangle" script, written by behoffski.  */

/* (?? Long description/discussion goes here...) */

]]))

assert(f:write([[
/* Always import environment-specific configuration items first.  */
#include <config.h>

#include <assert.h>
#include "charclass.h"
#include <ctype.h>
#include "fsamusts.h"
#include "fsatoken.h"
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include "xalloc.h"

#if DEBUG
#include <stdio.h>
#endif /* DEBUG */

]]))
assert(f:write([[
/* XNMALLOC defined here is identical to the one in gnulib's xalloc.h,
   except that it does not cast the result to "(t *)", and thus may
   be used via type-free MALLOC macros.  Note that we've left out
   XCALLOC here as this module does not use it.  */
#undef XNMALLOC
/* Allocate memory for N elements of type T, with error checking.  */
/* extern t *XNMALLOC (size_t n, typename t); */
# define XNMALLOC(n, t) \
    (sizeof (t) == 1 ? xmalloc (n) : xnmalloc (n, sizeof (t)))

]]))

assert(f:write(RawText("MALLOC"), "\n"))
assert(f:write(RawText("REALLOC"), "\n"))
assert(f:write(RawText("STREQ"), "\n"))

assert(f:write(RawText("'musts' explanation"), "\n"))
assert(f:write(RawText("icatalloc"), "\n"))
assert(f:write(RawText("icpyalloc"), "\n"))
assert(f:write(RawText("istrstr"), "\n"))
assert(f:write(RawText("freelist"), "\n"))
assert(f:write(RawText("enlist"), "\n"))
assert(f:write(RawText("comsubs"), "\n"))
assert(f:write(RawText("addlists"), "\n"))
assert(f:write(RawText("must typedef"), "\n"))
assert(f:write(RawText("inboth"), "\n"))
assert(f:write(RawText("resetmust"), "\n"))

-- Change dfamust to fsamusts_must, remove the dfa struct, and rename tokens
assert(f:write(Decls["must"]))
local MustBody = EditedText("dfamust definition",
    "  token t;",   "  fsatoken_token_t t;",
    "  token t = ", "  fsatoken_token_t t = ",
    "  prtok ", "  fsatoken_prtok ")
local MustBody = TextSubst(MustBody, [[
  struct dfamust *dm;
]], "")
MustBody = TextSubst(MustBody, "d->tindex", "nr_tokens")
MustBody = TextSubst(MustBody, "d->tokens", "token_list")
MustBody = TextSubst(MustBody, "d->musts", "must_list")
MustBody = TextSubst(MustBody, "MB_CUR_MAX == 1", "unibyte_locale")

MustBody = TextSubst(MustBody, [[
              charclass *ccl = &d->charclasses[t - CSET];
              int j;
              for (j = 0; j < NOTCHAR; j++)
                if (tstbit (j, *ccl))
                  break;
              if (! (j < NOTCHAR))
                break;
              t = j;
              while (++j < NOTCHAR)
                if (tstbit (j, *ccl)
]], [[
              charclass_t *ccl = charclass_get_pointer (t - CSET);
              int j;
              for (j = 0; j < NOTCHAR; j++)
                if (charclass_tstbit (j, ccl))
                  break;
              if (! (j < NOTCHAR))
                break;
              t = j;
              while (++j < NOTCHAR)
                if (charclass_tstbit (j, ccl)
]])

MustBody = TextSubst(MustBody, [[
done:
  if (strlen (result))
    {
      MALLOC (dm, 1);
      dm->exact = exact;
      dm->must = xmemdup (result, strlen (result) + 1);
      dm->next = must_list;
      must_list = dm;
    }
]], [[
done:
  if (strlen (result))
    {
      fsamusts_list_element_t *dm;

      MALLOC (dm, 1);
      dm->exact = exact;
      dm->must = xmemdup (result, strlen (result) + 1);
      dm->next = must_list;
      must_list = dm;
    }
]])

MustBody = TextSubst(MustBody, [[
  free (mp);
}
]], [[
  free (mp);

  return must_list;
}
]])
assert(f:write(FSATokenSubst(MustBody), "\n"))

-- Not needed: assert(f:write(RawText("dfamusts"), "\n"))

-- Finally, add trailer lines (vim)
assert(f:write([[
/* vim:set shiftwidth=2: */
]]))

assert(f:close())

------------------------------------------------------------------------------

----------------******** dfa-prl.c ********----------------

-- dfa-prl.c -- "Parallel" version of dfa.c, which is used to quantify the
-- performance of the new code by running it in parallel with the existing
-- code, and checking that the outputs are identical.

print("Generating dfa-prl.c by copying dfa.c and then applying edits")

os.execute("cp -fp dfa.c dfa-prl.c")

-- Read the entire file into a single segment
local dfaprlc = NewFileTable("dfa-prl.c")
Section("dfa-prl", dfaprlc, 0)
Segs:Tag("dfa-prl.c", 1, #dfaprlc)

-- Apply edits rather crudely, sigh.
-- dfasyntax() needs to be modified to initialise fsalex_syntax();
-- this is also perhaps the best point to add once-off initialisation
-- calls for the new modules.
-- We change dfa-prl.c's lex() function into OriginalLexFunction() (or
-- whatever), introduce a new lex() function that calls the new code and
-- the existing code side-by-side, and compares the result.

local body = RawText("dfa-prl.c")
body = TextSubst(body, [[
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
#include <limits.h>
#include <string.h>
#include <locale.h>
#include <stdbool.h>
]], [[
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h>
#include <limits.h>
#include <string.h>
#include <locale.h>
#include <stdbool.h>

/* HOOK: Hack in interfaces to new charclass and fsa* modules.  */
#include "charclass.h"
#include "fsatoken.h"
#include "fsalex.h"
#include "fsamusts.h"
#include "fsaparse.h"
#include "proto-lexparse.h"

/* HOOK: File handle for parallel lex/parse debug/log messages */
FILE *pll_log = NULL;

/* HOOK: Static variables to hold opaque parser and lexer contexts.  */
static fsaparse_ctxt_t *parser = NULL;
static fsalex_ctxt_t *lexer = NULL;

static void
HOOK_set_up_fsa_stuff_if_not_done_already (void)
{
  /* If lexer context is present, this function has been run previously.  */
  if (lexer != NULL)
    return;

  /* Create a new lexer instance, and give it error/warning fns  */
  lexer = fsalex_new ();
  fsalex_exception_fns (lexer, dfawarn, dfaerror);

  /* Start with a pool of 10 charclasses.  */
  charclass_initialise (10);

  /* Create a new parser instance, give it error/warning functions,
     and also provide a hook to the lexer.   */
  parser = fsaparse_new ();
  fsaparse_exception_fns (parser, dfawarn, dfaerror);
  fsaparse_lexer (parser, lexer,
                  (proto_lexparse_lex_fn_t *) fsalex_lex,
                  (proto_lexparse_exchange_fn_t *) fsalex_exchange);
}
]])

-- When dfasyntax receives syntax options, also tell fsalex_syntax.
body = TextSubst(body, [[
/* Entry point to set syntax options.  */
void
dfasyntax (reg_syntax_t bits, int fold, unsigned char eol)
{
  unsigned int i;

  syntax_bits_set = 1;
  syntax_bits = bits;
  case_fold = fold != 0;
  eolbyte = eol;

]], [[
typedef struct regex_name_mapping_struct
{
  reg_syntax_t flag;
  const char *name;
} regex_name_mapping_t;

static regex_name_mapping_t regex_names[] = {
  {RE_BACKSLASH_ESCAPE_IN_LISTS, "backslash_escape_in_lists"},
  {RE_BK_PLUS_QM,                "bk_plus_qm"},
  {RE_CHAR_CLASSES,              "char_classes"},
  {RE_CONTEXT_INDEP_ANCHORS,     "context_indep_anchors"},
  {RE_CONTEXT_INDEP_OPS,         "context_indep_ops"},
  {RE_CONTEXT_INVALID_OPS,       "context_invalid_ops"},
  {RE_DOT_NEWLINE,               "dot_newline"},
  {RE_DOT_NOT_NULL,              "dot_not_null"},
  {RE_HAT_LISTS_NOT_NEWLINE,     "hat_lists_not_newline"},
  {RE_INTERVALS,                 "intervals"},
  {RE_LIMITED_OPS,               "limited_ops"},
  {RE_NEWLINE_ALT,               "newline_alt"},
  {RE_NO_BK_BRACES,              "no_bk_braces"},
  {RE_NO_BK_PARENS,              "no_bk_parens"},
  {RE_NO_BK_REFS,                "no_bk_refs"},
  {RE_NO_BK_VBAR,                "no_bk_vbar"},
  {RE_NO_EMPTY_RANGES,           "no_empty_ranges"},
  {RE_UNMATCHED_RIGHT_PAREN_ORD, "unmatched_right_paren_ord"},
  {RE_NO_POSIX_BACKTRACKING,     "no_posix_backtracking"},
  {RE_NO_GNU_OPS,                "no_gnu_ops"},
  {RE_DEBUG,                     "debug"},
  {RE_INVALID_INTERVAL_ORD,      "invalid_interval_ord"},
  {RE_ICASE,                     "icase"},
  {RE_CARET_ANCHORS_HERE,        "caret_anchors_here"},
  {RE_CONTEXT_INVALID_DUP,       "context_invalid_dup"},
  {RE_NO_SUB,                    "no_sub"},
  {0, NULL}
};

/* Entry point to set syntax options.  */
void
dfasyntax (reg_syntax_t bits, int fold, unsigned char eol)
{
  unsigned int i;

  /* Hook: Debug buffer to record search syntax specifications.  */
  static char buf[256];
  char *p_buf;
  char *locale;

  syntax_bits_set = 1;
  syntax_bits = bits;
  case_fold = fold != 0;
  eolbyte = eol;

  HOOK_set_up_fsa_stuff_if_not_done_already ();

  /* HOOK: Tell fsalex module about syntax selections.  */
  fsalex_syntax (lexer, bits, fold, eol);

  /* HOOK: Record syntax selections in debug logfile.  */
  if (! pll_log)
    pll_log = fopen("/tmp/parallel.log", "a");
  locale = setlocale (LC_ALL, NULL);
  fprintf(pll_log, "\nSyntax: Case fold: %d; eol char: %02x; locale: %s",
          fold, (int) eol, locale);
  p_buf = buf;
  *p_buf++ = '\n';
  *p_buf++ = ' ';
  *p_buf   = '\0';
  for (i = 0; regex_names[i].name; i++)
    {
      char flag_char = (bits & regex_names[i].flag) ? '+' : '-';
      p_buf += sprintf(p_buf, " %c%s", flag_char, regex_names[i].name);
      if (strlen (buf) >= 82)
        {
          fprintf (pll_log, "%s", buf);
          p_buf = &buf[2];
          *p_buf = '\0';
        }
    }
  fprintf(pll_log, "%s\n", buf);

]])

-- When dfaparse receives pattern details, also tell fsalex_pattern.
body = TextSubst(body, [[
/* Main entry point for the parser.  S is a string to be parsed, len is the
   length of the string, so s can include NUL characters.  D is a pointer to
   the struct dfa to parse into.  */
void
dfaparse (char const *s, size_t len, struct dfa *d)
{
  dfa = d;
  lexptr = s;
  lexleft = len;
]], [[
/* Main entry point for the parser.  S is a string to be parsed, len is the
   length of the string, so s can include NUL characters.  D is a pointer to
   the struct dfa to parse into.  */
void
dfaparse (char const *s, size_t len, struct dfa *d)
{
  size_t i;
  dfa = d;
  lexptr = s;
  lexleft = len;

  HOOK_set_up_fsa_stuff_if_not_done_already ();

  /* HOOK: Tell fsalex about this pattern.  */
  fsalex_pattern (lexer, s, len);

  /* HOOK: Log debug messages privately so regression tests can be tried.  */
  if (! pll_log)
    pll_log = fopen("/tmp/parallel.log", "a");
  fprintf (pll_log, "Pattern:");
  for (i = 0; i < len; i++)
    fprintf (pll_log, "  %c", isprint ((unsigned char) s[i]) ? s[i] : ' ');
  fprintf (pll_log, "\n        ");
  for (i = 0; i < len; i++)
    fprintf (pll_log, " %02x", ((unsigned) s[i]) & 0xff);
  fprintf (pll_log, "\n");

]])

-- Whenever lex is called to get a token, call the original lex and also
-- fsalex_lex in parallel, and compare the results.  Two changes to do
-- this: 1. Rename "lex" to "original_lex".  2. Add a new "lex" function
-- that calls both "original_lex" and then "fsalex_lex", and compares the
-- result.
body = TextSubst(body, [[
static token
lex (void)
{
  unsigned int c, c2;
  bool backslash = false;
]], [[
static token
original_lex (void)
{
  unsigned int c, c2;
  bool backslash = false;
]])
body = TextSubst(body, [[
/* Recursive descent parser for regular expressions.  */

static token tok;               /* Lookahead token.  */
static size_t depth;            /* Current depth of a hypothetical stack
]], [[
static token
lex (void)
{
  token            original_token;
  fsatoken_token_t fsalex_token;

  original_token = original_lex ();
  fsalex_token   = fsalex_lex (lexer);

  fprintf (pll_log, "Token debug: Original, fsalex: %08lx %08lx\n",
           original_token, fsalex_token);

  if (fsalex_token == FSATOKEN_TK_REPMN)
    {
      int x_minrep, x_maxrep;
      x_minrep = fsalex_exchange(lexer,
                                 PROTO_LEXPARSE_OP_GET_REPMN_MIN, NULL);
      x_maxrep = fsalex_exchange(lexer,
                                 PROTO_LEXPARSE_OP_GET_REPMN_MAX, NULL);
      fprintf (pll_log, "       Original REPMN{%d,%d};  ", minrep, maxrep);
      fprintf (pll_log, "  FSATOKEN_TK_REPMN{%d,%d}\n", x_minrep, x_maxrep);
    }

  else if (fsalex_token >= FSATOKEN_TK_CSET)
    {
      size_t index;
      unsigned int * orig_ccl;
      int i;
      charclass_t *charset;
      char *description;
      static char buf[256];
      char *p_buf;

      /* Nybble (4bit)-to-char conversion array for little-bit-endian
         nybbles.  */
      static const char *disp_nybble = "084c2a6e195d3b7f";

      /* Report details of the original charclass produced by dfa.c.  */
      index = original_token - CSET;
      p_buf = buf;
      orig_ccl = dfa->charclasses[index];
      for (i = 0; i < CHARCLASS_INTS; i += 2)
        {
          int j = orig_ccl[i];
          *p_buf++ = ' ';
          *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 28) & 0x0f];

          j = orig_ccl[i + 1];
          *p_buf++ = disp_nybble[(j >>  0) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  4) & 0x0f];
          *p_buf++ = disp_nybble[(j >>  8) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 12) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 16) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 20) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 24) & 0x0f];
          *p_buf++ = disp_nybble[(j >> 28) & 0x0f];
        }
      *p_buf++ = '\0';
      fprintf (pll_log, "              original [%3lu]:%s\n", index, buf);

      /* Also report the charclass member details from fsalex etc.  */
      index = fsalex_token - FSATOKEN_TK_CSET;
      charset = charclass_get_pointer (index);
      description = charclass_describe (charset);
      index = charclass_get_index (charset);
      fprintf (pll_log, "    fsalex: [%3lu] %s\n", index, description);
    }

  return original_token;
}

static void
show_musts (const char *title, fsamusts_list_element_t *list)
{
  fsamusts_list_element_t *elem;
  static char buf[256];
  char *p_buf;

  fprintf(pll_log, "\n%s:\n", title);

  p_buf = buf;
  for (elem = list; elem != NULL; elem = elem->next)
    {
      if (((p_buf - buf) + 4 + strlen (elem->must)) > 72)
        {
          fprintf(pll_log, " %s\n", buf);
          p_buf = buf;
        }
      p_buf += sprintf(p_buf, " (%s) >%s<",
                       elem->exact ? "Entire" : "partial",
                       elem->must);
    }
  fprintf(pll_log, "%s\n", buf);
}

/* Recursive descent parser for regular expressions.  */

static token tok;               /* Lookahead token.  */
static size_t depth;            /* Current depth of a hypothetical stack
]])

body = TextSubst(body,
[[
/* Parse and analyze a single string of the given length.  */
void
dfacomp (char const *s, size_t len, struct dfa *d, int searchflag)
{
  dfainit (d);
  dfambcache (d);
  dfaparse (s, len, d);
  dfamust (d);
]],
[[
static fsatoken_token_t
hook_lexer (fsalex_ctxt_t *lexer_context)
{
  fsatoken_token_t temp_token;

  temp_token = fsalex_lex (lexer_context);
  fprintf(pll_log, "hook_lexer: token: %lx\n", temp_token);
  return temp_token;
}

/* Now do the lexing and parsing a SECOND time, this time by re-priming
   the lexer with the same pattern, but then calling fsaparse_parse ()
   instead of dfaparse ().  The list of tokens (postfix order) output by
   both parsers should be identical (assuming that we know from the
   earlier parallel-lex trial that the lexers were identical).  */

/* Parse and analyze a single string of the given length.  */
void
dfacomp (char const *s, size_t len, struct dfa *d, int searchflag)
{
  dfainit (d);
  dfambcache (d);
  dfaparse (s, len, d);
  dfamust (d);

  fsalex_pattern (lexer, s, len);
  fsaparse_lexer (parser, lexer,
                  (proto_lexparse_lex_fn_t *) hook_lexer,
                  (proto_lexparse_exchange_fn_t *) fsalex_exchange);
  fsaparse_parse (parser);

  /* YET ANOTHER HACK, 16 April 2014 (was it related to the lunar eclipse
     last night?? !!?? )
     Compare, side-by-side, the list of tokens generated by dfa.c and by
     fsaparse, and write these to the debug log file.  As elsewhere, these
     should be identical, as the modularised code starts as a functional
     clone of the original code.  (Later, if/when tokens are reworked to
     maintain abstractions at a higher level, the token lists will
     differ.)  */
  {
    size_t nr_tokens;
    fsatoken_token_t *token_list;
    size_t i;
    fsamusts_list_element_t *musts;

    fsaparse_get_token_list (parser, &nr_tokens, &token_list);
    fprintf (pll_log, "\ntokens:  original  fsaparse\n");
    for (i = 0; i < MAX (d->tindex, nr_tokens); ++i)
      {
        static char buf[256];
        if (i < d->tindex)
          {
            sprintf (buf, "%02lx", d->tokens[i]);
            fprintf (pll_log, "%17s ", buf);
          }
        else
          fprintf (pll_log, "%17s", "");
        if (i < nr_tokens)
          {
            sprintf (buf, "%02lx", token_list[i]);
            fprintf (pll_log, "%9s", buf);
          }
        fprintf (pll_log, "\n");
      }

    /* And finally, see how extracting musts from dfa.c compares to extracting
       musts via the fsa/charclass family of functions; again, these should
       be identical.  */
    musts = (fsamusts_list_element_t *) d->musts;
    show_musts ("original dfa.c", musts);

    /* ANOTHER UGLY HACK: Rely on dfa.c's case_fold and unibyte locale when
       instructing dfamust how to operate; an "Exchange" function might be
       more appropriate in the short-to-mid-term, but in the longer term,
       the token vocabulary should get more expressive, so that information
       can be conveyed directly.  */
    musts = fsamusts_must (NULL, nr_tokens, token_list,
                           /* dfa.c copy: */ case_fold,
                           /* current (dfa.c) locale: */ MB_CUR_MAX == 1);
    show_musts ("fsa* et al functions", musts);
  }

]])


-- Finally, write the modified dfa code to a separate C file.
local f = assert(io.open("dfa-prl.c", "w"))
assert(f:write(body))
assert(f:close())

----------------******** Makefile.am ********----------------

print("Modifying (if needed) Makefile.am; you may need to re-run automake...")
local Seg = Segs.SegList["grep_SOURCES"]
local t = Seg.RawText
if not t:match(" charclass.c fsatoken.c fsalex.c "
                     .. "fsaparse.c fsamusts.c dfa-prl.c ") then
   t = t:gsub("dfa.c ",
      "charclass.c fsatoken.c fsalex.c fsaparse.c fsamusts.c dfa-prl.c ")

   -- It is very Bad Form to modify the original raw segment, but we're
   -- tired at this point.
   Seg.RawText = t

   -- Write out the modified file; assume backup made separately (Git?)
   WriteFile(makefileam)
end





Last modified: Mon, 25 Nov 2019 12:00:02 UTC

GNU bug tracking system
Copyright (C) 1999 Darren O. Benham, 1997 nCipher Corporation Ltd, 1994-97 Ian Jackson.