Received: (at submit) by debbugs.gnu.org; 24 Nov 2016 22:55:28 +0000
From debbugs-submit-bounces <at> debbugs.gnu.org Thu Nov 24 17:55:27 2016
Received: from localhost ([127.0.0.1]:40800 helo=debbugs.gnu.org)
by debbugs.gnu.org with esmtp (Exim 4.84_2)
(envelope-from <debbugs-submit-bounces <at> debbugs.gnu.org>)
id 1cA2vO-0008V9-H6
for submit <at> debbugs.gnu.org; Thu, 24 Nov 2016 17:55:27 -0500
Received: from md5i.com ([75.151.244.229]:59172)
by debbugs.gnu.org with esmtp (Exim 4.84_2)
(envelope-from <mwd@HIDDEN>) id 1cA2vL-0008Uz-0G
for submit <at> debbugs.gnu.org; Thu, 24 Nov 2016 17:55:24 -0500
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=md5i.com;
s=dkim; h=Content-Type:MIME-Version:Message-ID:Date:Subject:To:From:Sender:
Reply-To:Cc:Content-Transfer-Encoding:Content-ID:Content-Description:
Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:
In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
List-Post:List-Owner:List-Archive;
bh=zrkcfVGbvTeVOde2SDzwWTdVGR8Ag9qa88SJ0+OhtFE=; b=MWY5H8bXaWp5KgXxwSVz53FmE/
7fac93qDQBOK43RZU1q7WagumEpoYFnDB7WC26vUU74N87AQjPdPsXg5W3BvG25ENMpIRXBNeEX6L
7JRIjh39yTl8ymfzivKZ5zqXm;
Received: from md5i by md5i.com with local (Exim 4.87)
(envelope-from <mwd@HIDDEN>) id 1cA2vK-0000ce-Gy
for submit <at> debbugs.gnu.org; Thu, 24 Nov 2016 17:55:22 -0500
From: Michael Welsh Duggan <mwd@HIDDEN>
To: submit <at> debbugs.gnu.org
Subject: CC Mode 5.33 (C/l); error raised on indent
X-Debbugs-Package: cc-mode
Date: Thu, 24 Nov 2016 17:55:22 -0500
Message-ID: <87y4085vz9.fsf@HIDDEN>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.0.50 (gnu/linux)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="=-=-="
X-Spam-Score: -2.5 (--)
X-Debbugs-Envelope-To: submit
X-BeenThere: debbugs-submit <at> debbugs.gnu.org
X-Mailman-Version: 2.1.18
Precedence: list
List-Id: <debbugs-submit.debbugs.gnu.org>
List-Unsubscribe: <https://debbugs.gnu.org/cgi-bin/mailman/options/debbugs-submit>,
<mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=unsubscribe>
List-Archive: <https://debbugs.gnu.org/cgi-bin/mailman/private/debbugs-submit/>
List-Post: <mailto:debbugs-submit <at> debbugs.gnu.org>
List-Help: <mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=help>
List-Subscribe: <https://debbugs.gnu.org/cgi-bin/mailman/listinfo/debbugs-submit>,
<mailto:debbugs-submit-request <at> debbugs.gnu.org?subject=subscribe>
Errors-To: debbugs-submit-bounces <at> debbugs.gnu.org
Sender: "Debbugs-submit" <debbugs-submit-bounces <at> debbugs.gnu.org>
X-Spam-Score: -2.5 (--)
--=-=-=
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
From emacs -Q, load the attached file. Then use the following recipe to
trigger the error:
M-x goto-line RET 334 RET
TAB
After the tab, I get the error:
c-just-after-func-arglist-p: Wrong type argument: integer-or-marker-p, nil
The backtrace is:
Debugger entered--Lisp error: (wrong-type-argument integer-or-marker-p nil)
c-forward-decl-or-cast-1(8107 top nil)
c-just-after-func-arglist-p(8114)
c-guess-basic-syntax()
c-indent-line()
#[0 "\301\302 !\210\303>\203;\212\304 \210o\203 \305\202'\304\305!\210\306\307!\203%\305\202'\310 )i\310 X\2035\311!\202:\212\311!)\207 \207" [indent-line-function syntax-propertize line-end-position (indent-relative indent-relative-maybe) beginning-of-line 0 looking-at "[ ]*$" current-indentation indent-line-to] 3 2103790 nil]()
c-indent-command(nil)
c-indent-line-or-region(nil nil)
funcall-interactively(c-indent-line-or-region nil nil)
call-interactively(c-indent-line-or-region nil nil)
command-execute(c-indent-line-or-region)
--=-=-=
Content-Type: text/x-csrc
Content-Disposition: attachment; filename=hashlib-chaining-break-cc-mode.c
/*
** Copyright (C) 2001-2016 by Carnegie Mellon University.
**
** @OPENSOURCE_LICENSE_START@
** See license information in ../../LICENSE.txt
** @OPENSOURCE_LICENSE_END@
*/
/*
* Copied from hashlib.c HG 5f941b5d4c17 2016-11-11 13:02 -0500
*
*
* This has been modified so that the table is an array of pointers
* and data is maintained in a separate skMemPool. The skMemPool
* contains a structure that contains a pointer, the user's key,
* and the user's value. These pointers allow hash collisions to
* chain using a circular linked list (as used in rwaddrcount).
*
* The memory pools may be sorted to allow the tranversing the data
* in sorted order, but this makes the hash table unusable as a
* hash table (no different that the old version of hashlib).
*
*/
/* File: hashlib.c: implements core hash library. */
#include <silk/silk.h>
RCSIDENT("$Id$");
#include <silk/hashlib.h>
#include <silk/utils.h>
#ifdef HASHLIB_TRACE_LEVEL
#define TRACEMSG_LEVEL HASHLIB_TRACE_LEVEL
#endif
#define TRACEMSG(lvl, x) TRACEMSG_TO_TRACEMSGLVL(lvl, x)
#include <silk/sktracemsg.h>
#if TRACEMSG_LEVEL > 0
#define TRC_FMT "hashlib.c:%d "
#define TRC_ARG __LINE__
#endif
/* Configuration */
/*
* The maximum size (in terms of number of bytes) of an individual
* hash block.
*/
#define HASH_MAX_MEMORY_BLOCK ((uint64_t)(SIZE_MAX >> 3))
/*
* Maximum number of blocks ever allocated, used for sizing the
* array of HashBlocks in the HashTable.
*
* Once the primary block reaches HASH_MAX_MEMORY_BLOCK (the
* maximum block size), new blocks will be allocated until this
* maximum is reached. This value cannot be greater than the
* HASHLIB_ITER_MAX_BLOCKS value defined in hashlib.h.
*/
#define HASH_MAX_BLOCKS 8
#if HASH_MAX_BLOCKS > HASHLIB_ITER_MAX_BLOCKS
#error "HASH_MAX_BLOCKS may not be greater than HASHLIB_ITER_MAX_BLOCKS"
#endif
/*
* When the number of HashBlocks gets to this count, a rehash is
* triggered unless the first block is already at the maximum block
* size.
*
* This value is not static since hashlib_metrics.c may set it.
*/
uint32_t REHASH_BLOCK_COUNT = 4;
/*
* The SECONDARY_BLOCK_FRACTION is used to determine the size
* HashBlocks following the first.
*
* If non-negative, tables 1...HASH_MAX_BLOCKS-1 have size given by
*
* table_size >> SECONDARY_BLOCK_FRACTION
*
* May also have one of the following values:
*
* = -1 means to keep halving
*
* = -2 means to keep halving starting at a secondary block size
* 1/4 of block 0
*
* = -3 means block 1 is 1/2 block 0, and all other blocks are 1/4
* block 0.
*
* = -4 means block 1 is 1/4 block 0, and all other blocks are 1/8
* block 0.
*
* In all cases, the size of blocks REHASH_BLOCK_COUNT through
* HASH_MAX_BLOCKS is fixed.
*
* This value is not static since hashlib_metrics.c may set it.
*/
int32_t SECONDARY_BLOCK_FRACTION = -3;
/*
* The minimum number of entries that may be stored in a block.
* This value must not be less than 256.
*/
#ifndef MIN_BLOCK_ENTRIES
#define MIN_BLOCK_ENTRIES (UINT64_C(1) << 8)
#endif
#if MIN_BLOCK_ENTRIES < 256
#error "The MIN_BLOCK_ENTRIES must be greater than 256"
#endif
/* Distinguished values for block index in the iterator */
#define HASH_ITER_BEGIN -1
#define HASH_ITER_END -2
typedef struct HashBlock_st HashBlock;
typedef struct HashEntry_st HashEntry;
/*
* The data in a HashTable is stored in multiple HashBlock structures.
*/
struct HashBlock_st {
/* Pointer to the next HashBlock */
HashBlock *next_block;
/* Pointer to an array of variable-sized HashEntry */
uint8_t *entries;
/* The table that owns this block */
const HashTable *table;
/* Total capacity of this block as a number of entries */
uint64_t max_entries;
/* Number of occupied entries in the block */
uint64_t num_entries;
};
/** the HashTable structure */
/* typedef struct HashTable_st HashTable; */
struct HashTable_st {
/** HTT_ALLOWDELETION or 0 */
uint8_t options;
/** Storage size of a key in bytes */
uint8_t key_len;
/** Size of a value in bytes */
uint8_t value_len;
/** Point at which to resize (fraction of 255) */
uint8_t load_factor;
/** Number of blocks */
uint8_t num_blocks;
/** Non-zero if rehashing has failed in the past */
uint8_t rehash_failed;
/** Non-zero if hash entries are sorted */
uint8_t is_sorted;
/** Non-zero if we can memset new memory to a value */
uint8_t can_memset_val;
/** Size of key; used as cmp_userdata by hashlib_sort_entries() */
size_t keylen_cmp_userdata;
size_t entry_len;
/** Pointer to representation of an empty value */
uint8_t *no_value;
/** Pointer to representation of a deleted value */
uint8_t *del_value;
/** Comparison function to use for a sorted table */
hashlib_sort_key_cmp_fn cmp_fn;
/** Caller's argument to the cmp_fn comparison function */
void *cmp_userdata;
/** A pointer to this table, so that macros may accept either a
* HashTable or a HashBlock */
//const HashTable *table;
/** The hash table is an array of pointers to HashEntry objects.
* The objects are stored in HashBlocks. */
HashEntry **table;
/** The blocks that hold the HashEntries. There is a linked list
* of these.*/
HashBlock *blocks;
#ifdef HASHLIB_RECORD_STATS
/** Statistics for this table */
hashlib_stats_t ht_stats;
#endif
};
/* HashTable */
struct HashEntry_st {
/** Pointer to the next object for handling collisions */
HashEntry *next;
/** The user's key and value data */
uint8_t data[1];
};
/* LOCAL FUNCTION PROTOTYPES */
/* pull in the code that defines the hash function */
#ifdef HASHLIB_LOOKUP2
/* hash code used up to and including SiLK 2.3.x, defined in
* hashlib-lookup2.c */
unsigned long
hash(
const uint8_t *k,
unsigned long len,
unsigned long initval);
unsigned long
hash2(
unsigned long *k,
unsigned long len,
unsigned long initval);
unsigned long
hash3(
uint8_t *k,
unsigned long len,
unsigned long initval);
#include "hashlib-lookup2.c"
#else
/* hash code used in SiLK 2.4 and beyond, defined in
* hashlib-lookup3.c */
uint32_t
hashword(
const uint32_t *k,
size_t length,
uint32_t initval);
void
hashword2(
const uint32_t *k,
size_t length,
uint32_t *pc,
uint32_t *pb);
uint32_t
hashlittle(
const void *key,
size_t length,
uint32_t initval);
void
hashlittle2(
const void *key,
size_t length,
uint32_t *pc,
uint32_t *pb);
uint32_t
hashbig(
const void *key,
size_t length,
uint32_t initval);
#include "hashlib-lookup3.c"
/* define hash() according to the byte order */
#if SK_BIG_ENDIAN
# define hash hashbig
#else
# define hash hashlittle
#endif
#endif /* HASHLIB_LOOKUP2 */
static HashBlock *
hashlib_create_block(
sk_hashtable_t *table,
uint64_t block_entries);
static void
hashlib_free_block(
HashBlock *block);
static int
hashlib_block_find_entry(
const HashBlock *block,
const uint8_t *key,
uint8_t **entry_pptr);
static int
hashlib_iterate_sorted(
const HashTable *table,
HASH_ITER *iter,
uint8_t **key_pptr,
uint8_t **val_pptr);
/* FUNCTION-LIKE MACROS */
/*
* Compute the maximum number of entries on the largest block in
* the table 'tbl'.
*/
#define HASH_GET_MAX_BLOCK_ENTRIES(tbl) \
((uint64_t)(HASH_MAX_MEMORY_BLOCK/HASH_GET_ENTRY_LEN(tbl)))
/*
* Return true if the HashBlock 'blk' is full; that is, whether
* the number of entries meets or exceeds the load factor.
*/
#define HASH_BLOCK_IS_FULL(blk) \
((blk)->num_entries >= (blk)->block_full)
/*
* Get number of bytes of storage required to hold a key in
* 'tbl', which may be a HashTable or a HashBlock.
*/
#define HASH_GET_KEY_LEN(tbl) \
((tbl)->table->key_len)
/*
* Get number of bytes of storage required to hold a value in
* 'tbl', which may be a HashTable or a HashBlock.
*/
#define HASH_GET_VALUE_LEN(tbl) \
((tbl)->table->value_len)
/*
* Get number of bytes of storage required to hold an entry in
* 'tbl', which may be a HashTable or a HashBlock.
*/
#define HASH_GET_ENTRY_LEN(tbl) \
((tbl)->table->entry_len)
/*
* Get a pointer to the storage key part of 'entry' in
* 'tbl' which may be a HashTable or HashBlock.
*/
#define HASHENTRY_GET_KEY(tbl, entry) \
((entry)->data)
/*
* Get a pointer to the value part of 'entry' in 'tbl'
* which may be a HashTable or HashBlock.
*/
#define HASHENTRY_GET_VALUE(tbl, entry) \
((entry)->data + HASH_GET_KEY_LEN(tbl))
/*
* Set the storage key part of 'entry' in 'tbl' to contain
* the bytes in 'key_bytes'. 'tbl' may be a HashTable or
* HashBlock.
*/
#define HASHENTRY_SET_KEY(tbl, entry, key_bytes) \
memcpy(HASHENTRY_GET_KEY(tbl, entry), (key_bytes), \
HASH_GET_KEY_LEN(tbl))
/*
* Return 1 if the bytes in 'value' match the empty value,
* otherwise 0. 'tbl' may be a table or block.
*/
#define HASH_VALUE_ISEMPTY(tbl, value) \
(0 == memcmp((value), (tbl)->table->no_value, \
HASH_GET_VALUE_LEN(tbl)))
/*
* Return 1 if the value part of the entry at 'entry' matches
* the empty value, otherwise 0. 'tbl' may be a HashTable or a
* HashBlock.
*/
#define HASHENTRY_ISEMPTY(tbl, entry) \
HASH_VALUE_ISEMPTY(tbl, HASHENTRY_GET_VALUE((tbl), (entry)))
/*
* Get a pointer to the entry at index 'hash_index' in 'blk',
* which must be a HashBlock.
*/
#define HASH_ENTRY_AT(blk, hash_index) \
((blk)->data + (HASH_GET_ENTRY_LEN(blk) * (hash_index)))
/*
* Get a pointer to the storage key part of the entry at index
* 'hash_index' in 'blk' which must be a HashBlock.
*/
#define HASH_KEY_AT(blk, hash_index) \
HASHENTRY_GET_KEY((blk), HASH_ENTRY_AT((blk), (hash_index)))
/*
* Get a pointer to the value part of the entry at index
* 'hash_index' in 'blk' which must be a HashBlock.
*/
#define HASH_VALUE_AT(blk, hash_index) \
HASHENTRY_GET_VALUE((blk), HASH_ENTRY_AT((blk), (hash_index)))
/*
* Increment the value of 'member' in the hashlib_stats structure
* for 'tbl', which may be a HashTable or a HashBlock.
*/
#ifndef HASHLIB_RECORD_STATS
#define HASH_STAT_INCR(tbl, member)
#else
/* need to cast away the "const" */
#define HASH_STAT_INCR(tbl, member) \
++(((HashTable *)(tbl)->table)->ht_stats. member )
#endif
#ifdef NDEBUG
#define HASH_ASSERT_SIZE_IS_POWER_2(blk_size)
#else
#define HASH_ASSERT_SIZE_IS_POWER_2(blk_size) \
do { \
uint64_t high_bits; \
BITS_IN_WORD64(&high_bits, (blk_size)); \
assert(1 == high_bits); \
} while(0)
#endif /* NDEBUG */
/* FUNCTION DEFINITIONS */
HashTable *
hashlib_create_table(
uint8_t key_len,
uint8_t value_len,
uint8_t value_type,
uint8_t *no_value,
uint8_t *appdata,
uint32_t appdata_size,
uint64_t estimated_count,
uint8_t load_factor)
{
HashTable *table = NULL;
HashBlock *block = NULL;
uint64_t initial_entries;
/* Validate arguments */
if (0 == key_len || 0 == value_len) {
TRACEMSG(1,(TRC_FMT "create table: invalid width key %u, value %u",
TRC_ARG, key_len, value_len));
assert(0);
return NULL;
}
/* Allocate memory for the table and initialize attributes. */
table = (HashTable*)calloc(1, sizeof(HashTable));
if (table == NULL) {
TRACEMSG(1,(TRC_FMT "Failed to allocate new HashTable.", TRC_ARG));
return NULL;
}
/* Initialize the table structure */
table->table = table;
table->key_len = key_len;
table->value_len = value_len;
table->load_factor = load_factor;
table->entry_len = (sizeof(HashEntry) - offsetof(HashEntry, data)
+ key_len + value_len);
TRACEMSG(3,
(TRC_FMT "key_len %u, value_len %u, entry_len %u, load_factor %u",
TRC_ARG, key_len, value_len, key_len + value_len, load_factor));
/* Application data */
(void)value_type;
(void)appdata;
(void)appdata_size;
/*
* Calculate the number of entres in the initial block. This is a
* power of 2 with at least MIN_BLOCK_ENTRIES entries that
* accomodates the data at a load less than the given load factor.
*/
/* account for the load factor */
initial_entries = (estimated_count << 8) / table->load_factor;
/* compute power of two greater than initial_entries */
initial_entries = UINT64_C(1) << (1 + skIntegerLog2(initial_entries));
TRACEMSG(2,((TRC_FMT "estimated_count %" PRIu64 ", initial_entries %"
PRIu64 " (calculated %" PRIu64 "), min_entries %" PRIu64
", max_entries %" PRIu64),
TRC_ARG, estimated_count, initial_entries,
((estimated_count << 8) / table->load_factor),
MIN_BLOCK_ENTRIES, HASH_GET_MAX_BLOCK_ENTRIES(table)));
if (initial_entries <= MIN_BLOCK_ENTRIES) {
initial_entries = MIN_BLOCK_ENTRIES;
TRACEMSG(2,(TRC_FMT "adjusted initial_entries to %" PRIu64,
TRC_ARG, initial_entries));
} else {
while (initial_entries > HASH_GET_MAX_BLOCK_ENTRIES(table)) {
initial_entries >>= 1;
}
TRACEMSG(2,(TRC_FMT "adjusted initial_entries to %" PRIu64,
TRC_ARG, initial_entries));
assert(initial_entries >= MIN_BLOCK_ENTRIES);
}
TRACEMSG(1,(TRC_FMT "Adding block #0...", TRC_ARG));
/* Start with one block */
table->num_blocks = 1;
table->blocks = hashlib_create_block(table, initial_entries);
if (NULL == table->blocks) {
TRACEMSG(1,(TRC_FMT "Adding block #0 failed.", TRC_ARG));
table->num_blocks = 0;
hashlib_free_table(table);
return NULL;
}
TRACEMSG(1,(TRC_FMT "Added block #%u.",
TRC_ARG, table->num_blocks - 1));
return table;
}
void
hashlib_free_table(
sk_hashtable_t *table)
{
unsigned int k;
if (NULL == table) {
return;
}
TRACEMSG(1,(TRC_FMT "Freeing HashTable...", TRC_ARG));
/* Free all the blocks in the table */
for (k = 0; k < table->num_blocks; ++k) {
TRACEMSG(2,(TRC_FMT "Freeing block #%u", TRC_ARG, k));
hashlib_free_block(table->blocks[k]);
}
/* Free the empty pointer memory */
free(table->no_value);
/* Free the table structure itself */
free(table);
TRACEMSG(1,(TRC_FMT "Freed HashTable.", TRC_ARG));
}
/*
* NOTE: Assumes block_entries is a power of 2. Very important!
*/
static HashBlock *
hashlib_create_block(
sk_hashtable_t *table,
uint64_t block_entries)
{
HashBlock *block;
uint64_t block_bytes;
uint32_t entry_i;
HASH_ASSERT_SIZE_IS_POWER_2(block_entries);
HASH_STAT_INCR(table, blocks_allocated);
block_bytes = block_entries * (uint64_t)HASH_GET_ENTRY_LEN(table);
TRACEMSG(1,((TRC_FMT "Creating block; requesting %#" PRIx64
" %" PRIu32 "-byte entries (%" PRIu64 " bytes)..."),
TRC_ARG, block_entries, HASH_GET_ENTRY_LEN(table),
block_bytes));
#if SIZE_MAX < UINT64_MAX
/* verify we do not overflow the size of a size_t */
if (block_bytes > SIZE_MAX) {
TRACEMSG(1,(TRC_FMT "Cannot create block; size exceeds SIZE_MAX.",
TRC_ARG));
return NULL;
}
#endif
/* Allocate memory for the block and initialize attributes. */
block = (HashBlock*)malloc(sizeof(HashBlock));
if (block == NULL) {
TRACEMSG(1,(TRC_FMT "Failed to allocate new HashBlock.", TRC_ARG));
return NULL;
}
block->data = (uint8_t*)malloc(block_bytes);
if (block->data == NULL) {
free(block);
TRACEMSG(1,(TRC_FMT "Failed to allocate new data block.", TRC_ARG));
return NULL;
}
block->table = table;
block->max_entries = block_entries;
block->num_entries = 0;
block->block_full = table->load_factor * (block_entries >> 8);
/* Copy "empty" value to each entry. Garbage key values are
* ignored, so we don't bother writing to the keys. When the
* application overestimates the amount of memory needed, this can
* be bottleneck. */
if (table->can_memset_val) {
memset(block->data, table->no_value[0], block_bytes);
} else {
uint8_t *data;
/* Initialize 'data' to point to at the value part of the
* first entry in the block; move 'data' to the next value
* at every iteration */
for (entry_i = 0, data = HASH_VALUE_AT(block, 0);
entry_i < block->max_entries;
++entry_i, data += HASH_GET_ENTRY_LEN(table))
{
memcpy(data, table->no_value,
HASH_GET_VALUE_LEN(block));
}
}
return block;
}
static void
hashlib_free_block(
HashBlock *block)
{
size_t i;
/* Free the data and the block itself */
assert(block);
if (table->user_free_fn) {
for (i = 0; i < block->num_entries; ++i) {
table->user_free_fn(/* key , value , ctx */);
}
}
free(block->data);
free(block);
}
/*
* Rehash entire table into a single block.
*/
int
hashlib_rehash(
sk_hashtable_t *table)
{
const uint64_t max_entries = HASH_GET_MAX_BLOCK_ENTRIES(table);
HashBlock *new_block = NULL;
HashBlock *block = NULL;
uint64_t num_entries = 0;
uint64_t initial_entries;
const uint8_t *key_ref;
const uint8_t *val_ref;
uint8_t *entry;
uint8_t *new_entry;
int rv;
unsigned int k;
uint64_t i;
HASH_STAT_INCR(table, rehashes);
if (table->is_sorted) {
TRACEMSG(1,(TRC_FMT "ERROR: Attempt to rehash a sorted HashTable",
TRC_ARG));
assert(0 == table->is_sorted);
return ERR_SORTTABLE;
}
/*
* Count the total number of entries so we know what we need to
* allocate. We base this on the actual size of the blocks,
* and use the power of 2 that's double the smallest power of 2
* bigger than the sum of block sizes. It's justified by the
* intuition that once we reach this point, we've decided that
* we're going to explore an order of magnitude larger
* table. This particular scheme seems to work well in practice
* although it's difficult to justify theoretically--this is a
* rather arbitrary definition of "order of magnitude".
*/
for (k = 0; k < table->num_blocks; ++k) {
num_entries += table->blocks[k]->max_entries;
}
assert(num_entries > 0);
TRACEMSG(1,((TRC_FMT "Rehashing table having %" PRIu64
" %" PRIu32 "-byte entries..."),
TRC_ARG, num_entries, HASH_GET_ENTRY_LEN(table)));
if (num_entries >= max_entries) {
TRACEMSG(1,((TRC_FMT "Too many entries for rehash; "
" num_entries=%" PRIu64 " >= max_entries=%" PRIu64 "."),
TRC_ARG, num_entries, max_entries));
return ERR_OUTOFMEMORY;
}
/* Choose the size for the initial block as the next power of 2
* greater than the number of entries. */
initial_entries = UINT64_C(1) << (1 + skIntegerLog2(num_entries));
if (initial_entries < MIN_BLOCK_ENTRIES) {
initial_entries = MIN_BLOCK_ENTRIES;
}
/* double it once more unless we already have 2^28 entries */
if (((max_entries >> 1) > initial_entries)
&& (initial_entries < UINT64_C(0x10000000)))
{
initial_entries <<= 1;
}
if (initial_entries > max_entries) {
TRACEMSG(1,((TRC_FMT "Will not rehash table; new initial_entries=%"
PRIu64 " > max_entries=%" PRIu64 "."),
TRC_ARG, initial_entries, max_entries));
return ERR_OUTOFMEMORY;
}
TRACEMSG(1,(TRC_FMT "Allocating new rehash block...", TRC_ARG));
/* Create the new block */
new_block = hashlib_create_block(table, initial_entries);
if (new_block == NULL) {
TRACEMSG(1,((TRC_FMT "Allocating rehash block failed for %#" PRIx64
" entries."), TRC_ARG, initial_entries));
return ERR_OUTOFMEMORY;
}
TRACEMSG(1,(TRC_FMT "Allocated rehash block.", TRC_ARG));
/* Walk through each block in the table looking for non-empty
* entries and insert them into the new block. */
for (k = table->num_blocks; k > 0; ) {
--k;
block = table->blocks[k];
TRACEMSG(2,(TRC_FMT "Rehashing entries from block #%u", TRC_ARG, k));
for (i = 0, entry = HASH_ENTRY_AT(block, 0);
i < block->max_entries;
++i, entry += HASH_GET_ENTRY_LEN(block))
{
key_ref = HASHENTRY_GET_KEY(block, entry);
val_ref = HASHENTRY_GET_VALUE(block, entry);
/* If not empty, then copy the entry into the new block */
if (!HASH_VALUE_ISEMPTY(block, val_ref)) {
rv = hashlib_block_find_entry(new_block,
key_ref, &new_entry);
if (rv != ERR_NOTFOUND) {
/* value is not-empty, but we cannot find the key
* in the hash table. either the hashlib code is
* broken, or the user set a value to the
* no_value value and broke the collision
* resolution mechanism. if assert() is active,
* the next line will call abort(). */
TRACEMSG(1,((TRC_FMT "During the rehash, unexpectedly"
" found an existing key in the new block"),
TRC_ARG));
assert(rv == ERR_NOTFOUND);
free(new_block);
table->num_blocks = 1 + k;
return ERR_INTERNALERROR;
}
/* Copy the key and value */
HASHENTRY_SET_KEY(new_block, new_entry, key_ref);
memcpy(HASHENTRY_GET_VALUE(new_block, new_entry),
val_ref, HASH_GET_VALUE_LEN(block));
++new_block->num_entries;
HASH_STAT_INCR(table, rehash_inserts);
}
}
/* Free the block */
hashlib_free_block(block);
table->blocks[k] = NULL;
} /* blocks */
/* Associate the new block with the table */
table->num_blocks = 1;
table->blocks[0] = new_block;
TRACEMSG(1,(TRC_FMT "Rehashed table.", TRC_ARG));
return OK;
}
/*
* Add a new block to a table.
*/
static int
hashlib_add_block(
sk_hashtable_t *table,
uint64_t new_block_entries)
{
HashBlock *block = NULL;
assert(table->num_blocks < HASH_MAX_BLOCKS);
if (table->num_blocks >= HASH_MAX_BLOCKS) {
TRACEMSG(1,((TRC_FMT "Cannot allocate another block:"
" num_blocks=%" PRIu32 " >= HASH_MAX_BLOCKS=%u."),
TRC_ARG, table->num_blocks, HASH_MAX_BLOCKS));
return ERR_NOMOREBLOCKS;
}
/* Create the new block */
TRACEMSG(1,((TRC_FMT "Adding block #%u..."),
TRC_ARG, table->num_blocks));
block = hashlib_create_block(table, new_block_entries);
if (block == NULL) {
TRACEMSG(1,(TRC_FMT "Adding block #%u failed.",
TRC_ARG, table->num_blocks));
return ERR_OUTOFMEMORY;
}
/* Add it to the table */
table->blocks[table->num_blocks] = block;
++table->num_blocks;
TRACEMSG(1,(TRC_FMT "Added block #%u.",
TRC_ARG, table->num_blocks - 1));
return OK;
}
/*
* See what size the next hash block should be.
*/
static uint64_t
hashlib_compute_next_block_entries(
sk_hashtable_t *table)
{
uint64_t block_entries = 0;
/* This condition will only be true when the primary block has
* reached the maximum block size. */
if (table->num_blocks >= REHASH_BLOCK_COUNT) {
return table->blocks[table->num_blocks-1]->max_entries;
}
/* Otherwise, it depends on current parameters */
if (SECONDARY_BLOCK_FRACTION >= 0) {
block_entries =
(table->blocks[0]->max_entries >> SECONDARY_BLOCK_FRACTION);
} else if (SECONDARY_BLOCK_FRACTION == -1) {
/* Keep halving blocks */
block_entries =
(table->blocks[table->num_blocks-1]->max_entries >> 1);
} else if (SECONDARY_BLOCK_FRACTION == -2) {
if (table->num_blocks == 1) {
/* First secondary block is 1/4 size of main block */
block_entries =
table->blocks[table->num_blocks-1]->max_entries >>2;
} else {
/* Other secondary blocks are halved */
block_entries =
table->blocks[table->num_blocks-1]->max_entries >>1;
}
} else if (SECONDARY_BLOCK_FRACTION == -3) {
if (table->num_blocks == 1) {
/* First secondary block is 1/2 size of main block */
block_entries = table->blocks[0]->max_entries >> 1;
} else {
/* All others are 1/4 size of main block */
block_entries = table->blocks[0]->max_entries >> 2;
}
} else if (SECONDARY_BLOCK_FRACTION == -4) {
if (table->num_blocks == 1) {
/* First secondary block is 1/4 size of main block */
block_entries = table->blocks[0]->max_entries >> 2;
} else {
/* All others are 1/8 size of main block */
block_entries = table->blocks[0]->max_entries >> 3;
}
} else {
skAbort();
}
return block_entries;
}
/*
* Algorithm:
* - If the primary block is at its maximum, never rehash, only add
* new blocks.
* - If we have a small table, then don't bother creating
* secondary tables. Simply rehash into a new block.
* - If we've exceeded the maximum number of blocks, rehash
* into a new block.
* - Otherwise, create a new block
*/
static int
hashlib_resize_table(
sk_hashtable_t *table)
{
uint64_t new_block_entries;
int rv;
TRACEMSG(1,(TRC_FMT "Resizing the table...", TRC_ARG));
/* Compute the (potential) size of the new block */
new_block_entries = hashlib_compute_next_block_entries(table);
assert(new_block_entries != 0);
/* If we're at the maximum number of blocks (which implies that
* the first block is at its max, and we can't resize, then that's
* it. */
if (table->num_blocks == HASH_MAX_BLOCKS) {
TRACEMSG(1,((TRC_FMT "Unable to resize table: no more blocks;"
" table contains %" PRIu64 " %" PRIu32 "-byte entries"
" in %" PRIu64 " buckets across %u blocks"),
TRC_ARG, hashlib_count_entries(table),
HASH_GET_ENTRY_LEN(table),
hashlib_count_buckets(table), table->num_blocks));
return ERR_NOMOREBLOCKS;
}
/* If the first block is at its maximum size or if we have tried
* and failed to rehash in the past, then add a new block. Once we
* reach the maximum block size, we don't rehash. Instead we keep
* adding blocks until we reach the maximum. */
if ((table->blocks[0]->max_entries
== HASH_GET_MAX_BLOCK_ENTRIES(table))
|| table->rehash_failed)
{
assert(new_block_entries > MIN_BLOCK_ENTRIES);
return hashlib_add_block(table, new_block_entries);
}
/* If we have REHASH_BLOCK_COUNT blocks, or the new block would be
* too small, we simply rehash. */
if ((new_block_entries < MIN_BLOCK_ENTRIES) ||
(table->num_blocks >= REHASH_BLOCK_COUNT))
{
TRACEMSG(1,((TRC_FMT "Resize table forcing rehash;"
" new_block_entries = %#" PRIx64
"; num_blocks = %u; REHASH_BLOCK_COUNT = %" PRIu32 "."),
TRC_ARG, new_block_entries,
table->num_blocks, REHASH_BLOCK_COUNT));
rv = hashlib_rehash(table);
if (rv != ERR_OUTOFMEMORY) {
return rv;
}
/* rehashing failed. try instead to add a new (small) block */
table->rehash_failed = 1;
if (new_block_entries < MIN_BLOCK_ENTRIES) {
new_block_entries = MIN_BLOCK_ENTRIES;
}
TRACEMSG(1,(TRC_FMT "Rehash failed; creating new block instead...",
TRC_ARG));
}
/* Assert several global invariants */
assert(new_block_entries >= MIN_BLOCK_ENTRIES);
assert(new_block_entries <= HASH_GET_MAX_BLOCK_ENTRIES(table));
assert(table->num_blocks < HASH_MAX_BLOCKS);
/* Otherwise, add new a new block */
return hashlib_add_block(table, new_block_entries);
}
#if 0
static void
assert_not_already_there(
const HashTable *table,
const uint8_t *key)
{
const uint8_t *entry;
const HashBlock *block;
unsigned int k;
int rv;
for (k = 0; k < (table->num_blocks-1); ++k) {
block = table->blocks[k];
rv = hashlib_block_find_entry(block, key, &entry);
if (rv == OK) {
getc(stdin);
}
}
}
#endif /* 0 */
int
hashlib_insert(
sk_hashtable_t *table,
const uint8_t *key,
uint8_t **value_pptr)
{
HashBlock *block = NULL;
uint8_t *entry = NULL;
unsigned int k;
int rv;
assert(table);
HASH_STAT_INCR(table, inserts);
if (table->is_sorted) {
TRACEMSG(1,(TRC_FMT "Attempted an insert into a sorted HashTable",
TRC_ARG));
assert(0 == table->is_sorted);
return ERR_SORTTABLE;
}
/* See if we are ready to do a resize by either adding a block or
* rehashing. */
if (HASH_BLOCK_IS_FULL(table->blocks[table->num_blocks-1])){
rv = hashlib_resize_table(table);
if (rv != OK) {
return rv;
}
}
assert(table->num_blocks);
/* Look in each block for the key */
for (k = 0; k < table->num_blocks; ++k) {
block = table->blocks[k];
if (hashlib_block_find_entry(block, key, &entry) == OK) {
/* Found entry, use it */
*value_pptr = HASHENTRY_GET_VALUE(block, entry);
return OK_DUPLICATE;
}
}
/*
* We did not find it; do an insert into the last block by
* setting the key AND increasing the count. The caller will set
* the value.
*
* NOTE: entry is pointer to the insert location and
* block is pointing at the last block, and this is why we
* first check whether need to grow the table.
*
* NOTE: Since we return a reference to the value, the user could
* either not set the value or mistakenly set the value to the
* 'no_value'. This is problematic, since the internal count
* will have been incremented even though in essence no entry has
* been added. This may lead to growing the table sooner than
* necesssary.
*
* Even worse is if the user updates an existing entry's value to
* the 'no_value' after there has been a collision on that
* entry. Keys that collided can no longer be found in the table.
*/
*value_pptr = HASHENTRY_GET_VALUE(block, entry);
HASHENTRY_SET_KEY(block, entry, key);
++block->num_entries;
return OK;
}
int
hashlib_lookup(
const HashTable *table,
const uint8_t *key,
uint8_t **value)
{
HashEntry *entry;
assert(table);
HASH_STAT_INCR(table, lookups);
if (table->is_sorted) {
TRACEMSG(1,(TRC_FMT "Attempt to lookup in a sorted HashTable",
TRC_ARG));
assert(0 == table->is_sorted);
return ERR_SORTTABLE;
}
if (hashlib_find_entry(table, key, &entry) == OK) {
/* Return pointer to the value in the entry structure */
*value_pptr = HASHENTRY_GET_VALUE(block, entry);
return OK;
}
}
return ERR_NOTFOUND;
}
/*
* If not found, points to insertion point,
*/
static int
hashlib_find_entry(
const HashBlock *block,
const uint8_t *key,
uint8_t **entry_pptr)
{
#ifndef NDEBUG
uint32_t num_tries = 0;
#endif
uint32_t hash_index;
uint32_t hash_value;
uint32_t hash_probe_increment;
#ifdef HASHLIB_RECORD_STATS
int first_check = 1;
#endif
HASH_STAT_INCR(block, find_entries);
/*
* First this code computes the hash for the key, the
* 'hash_value'.
*
* The 'hash_value' is masked by the size of the block to
* determine which bucket to check (the 'hash_index'). Since the
* block size is a power of 2, masking can be used as a modulo
* operation.
*
* If the bucket is empty, this function is done; the function
* passes back a handle to that bucket and returns ERR_NOTFOUND.
*
* If the bucket is not empty, the function checks to see if
* bucket's key matches the key passed into the function. The
* comparison is done by memcmp()ing the keys. If the keys
* match, the entry is found; the function passes back a handle
* the bucket and returns OK.
*
* If the keys do not match, it means multiple keys return the
* same hash value; that is, there is a collision. A new bucket
* is selected by incrementing the 'hash_value' by the
* 'hash_probe_increment' value and masking the result by the
* block size.
*
* The process repeats until either an empty bucket is found or
* the keys match.
*
* This collision resolution mechanism is what makes removal
* impossible. To allow removal, we would need either to define
* a special "deleted-entry" value or to rehash the table after
* each deletion.
*
* Collision resolution is also why the caller should never
* update a bucket's value to the no_value value---though
* there is no way for the hashlib code to enforce this.
*/
hash_value = hash(key, HASH_GET_KEY_LEN(block), 0);
hash_index = hash_value & ((uint32_t)block->max_entries - 1);
*entry_pptr = HASH_ENTRY_AT(block, hash_index);
if (!HASHENTRY_ISEMPTY(block, *entry_pptr)) {
do {
/* compare the keys */
if (0 == memcmp(HASHENTRY_GET_KEY(block, *entry_pptr),
key, HASH_GET_KEY_LEN(block)))
{
/* Found a match, we're done */
return OK;
}
*entry_pptr = (*entry_pptr)->next;
} while (*entry_pptr != HASH_ENTRY_AT(block, hash_index));
}
return ERR_NOTFOUND;
}
#ifdef HASHLIB_RECORD_STATS
void
hashlib_clear_stats(
sk_hashtable_t *table)
{
if (table) {
memset(&table->ht_stats, 0, sizeof(table->ht_stats));
}
}
void
hashlib_get_stats(
const HashTable *table,
hashlib_stats_t *hash_stats)
{
if (table && hash_stats) {
memcpy(hash_stats, &table->ht_stats, sizeof(table->ht_stats));
}
}
void
hashlib_print_stats(
FILE *fp,
const HashTable *table)
{
const hashlib_stats_t *hts = &table->ht_stats;
fprintf(fp, "Accumulated hashtable statistics:\n");
fprintf(fp, " %" PRIu32 " total allocations.\n", hts->blocks_allocated);
fprintf(fp, " %" PRIu64 " total inserts.\n", hts->inserts);
fprintf(fp, " %" PRIu64 " total lookups.\n", hts->lookups);
fprintf(fp, " %" PRIu32 " total rehashes.\n", hts->rehashes);
fprintf(fp, " %" PRIu64 " inserts due to rehashing.\n",
hts->rehash_inserts);
fprintf(fp, " %" PRIu64 " total finds.\n", hts->find_entries);
fprintf(fp, " %" PRIu64 " total find collisions.\n",
hts->find_collisions);
fprintf(fp, " %" PRIu64 " total collision hops.\n",
hts->collision_hops);
}
#endif /* HASHLIB_RECORD_STATS */
HASH_ITER
hashlib_create_iterator(
const HashTable UNUSED(*table))
{
HASH_ITER iter;
memset(&iter, 0, sizeof(HASH_ITER));
iter.block = HASH_ITER_BEGIN;
return iter;
}
int
hashlib_iterate(
const HashTable *table,
HASH_ITER *iter,
uint8_t **key_pptr,
uint8_t **val_pptr)
{
#ifdef TRACEMSG_LEVEL
static uint64_t so_far = 0;
#endif
HashBlock *block;
uint8_t *entry;
if (iter->block == HASH_ITER_END) {
return ERR_NOMOREENTRIES;
}
if (table->is_sorted && table->num_blocks > 1) {
/* Use sorted iterator if we should */
return hashlib_iterate_sorted(table, iter, key_pptr, val_pptr);
}
/* Start at the first entry in the first block or increment the
* iterator to start looking at the next entry. */
if (iter->block == HASH_ITER_BEGIN) {
/* Initialize the iterator. */
memset(iter, 0, sizeof(HASH_ITER));
#ifdef TRACEMSG_LEVEL
TRACEMSG(2,(TRC_FMT "Iterate. Starting to iterate over HashTable...",
TRC_ARG));
so_far = 0;
#endif
} else {
++iter->index;
}
/* Walk through indices of current block until we find a
* non-empty. Once we reach the end of the block, move on to the
* next block. */
while (iter->block < table->num_blocks) {
/* Select the current block */
block = table->blocks[iter->block];
/* Find the next non-empty entry in the current block (if
* there is one). */
for (entry = HASH_ENTRY_AT(block, iter->index);
iter->index < block->max_entries;
++iter->index, entry += HASH_GET_ENTRY_LEN(block))
{
if (!HASHENTRY_ISEMPTY(block, entry)) {
/* We found an entry, return it */
*key_pptr = HASHENTRY_GET_KEY(block, entry);
*val_pptr = HASHENTRY_GET_VALUE(block, entry);
#ifdef TRACEMSG_LEVEL
++so_far;
#endif
return OK;
}
}
/* At the end of the block. */
TRACEMSG(2,((TRC_FMT "Iterate. Finished block #%u containing %" PRIu64
" entries. Total visted %" PRIu64),
TRC_ARG, iter->block, block->num_entries,so_far));
/* try the next block */
++iter->block;
iter->index = 0;
}
/* We're past the last entry of the last block, so we're done. */
*key_pptr = NULL;
*val_pptr = NULL;
iter->block = HASH_ITER_END;
TRACEMSG(2,(TRC_FMT "Iterate. No more entries. Total visited %"
PRIu64, TRC_ARG, so_far));
return ERR_NOMOREENTRIES;
}
static int
hashlib_iterate_sorted(
const HashTable *table,
HASH_ITER *iter,
uint8_t **key_pptr,
uint8_t **val_pptr)
{
uint8_t *lowest_entry = NULL;
unsigned int k;
assert(iter->block != HASH_ITER_END);
/* Start at the first entry in the first block or increment the
* iterator to start looking at the next entry. */
if (iter->block == HASH_ITER_BEGIN) {
/* Initialize the iterator. */
memset(iter, 0, sizeof(HASH_ITER));
TRACEMSG(2,((TRC_FMT "Iterate. Starting to iterate"
" over sorted HashTable..."), TRC_ARG));
} else {
/* Increment the pointer in the block from which we took the
* entry last time. */
++iter->block_idx[iter->block];
}
/* Find the first available value across all blocks; this is our
* arbitrary "lowest" value. */
for (k = 0; k < table->num_blocks; ++k) {
if (iter->block_idx[k] < table->blocks[k]->num_entries) {
iter->block = k;
lowest_entry = HASH_ENTRY_AT(table->blocks[k],
iter->block_idx[k]);
break;
}
}
if (k == table->num_blocks) {
/* We've processed all blocks. Done. */
*key_pptr = NULL;
*val_pptr = NULL;
iter->block = HASH_ITER_END;
TRACEMSG(2,(TRC_FMT "Iterate. No more entries.", TRC_ARG));
return ERR_NOMOREENTRIES;
}
/* Compare our arbitrary "lowest" with every remaining block to
* find the actual lowest. */
for ( ++k; k < table->num_blocks; ++k) {
if ((iter->block_idx[k] < table->blocks[k]->num_entries)
&& (table->cmp_fn(HASH_ENTRY_AT(table->blocks[k],
iter->block_idx[k]),
lowest_entry,
table->cmp_userdata)
< 0))
{
iter->block = k;
lowest_entry = HASH_ENTRY_AT(table->blocks[k],
iter->block_idx[k]);
}
}
/* return lowest */
*key_pptr = HASHENTRY_GET_KEY(table, lowest_entry);
*val_pptr = HASHENTRY_GET_VALUE(table, lowest_entry);
return OK;
}
uint64_t
hashlib_count_buckets(
const HashTable *table)
{
unsigned int k;
uint64_t total = 0;
for (k = 0; k < table->num_blocks; ++k) {
total += table->blocks[k]->max_entries;
}
return total;
}
uint64_t
hashlib_count_entries(
const HashTable *table)
{
unsigned int k;
uint64_t total = 0;
for (k = 0; k < table->num_blocks; ++k) {
total += table->blocks[k]->num_entries;
TRACEMSG(2,((TRC_FMT "entry count for block #%u is %" PRIu64 "."),
TRC_ARG, k, table->blocks[k]->num_entries));
}
return total;
}
uint64_t
hashlib_count_nonempties(
const HashTable *table)
{
const HashBlock *block;
const uint8_t *entry;
unsigned int k;
uint64_t total;
uint64_t count;
uint64_t i;
total = 0;
for (k = 0; k < table->num_blocks; ++k) {
block = table->blocks[k];
count = 0;
for (i = 0, entry = HASH_ENTRY_AT(block, 0);
i < block->max_entries;
++i, entry += HASH_GET_ENTRY_LEN(block))
{
if (!HASHENTRY_ISEMPTY(block, entry)) {
++count;
}
}
total += count;
TRACEMSG(2,((TRC_FMT "nonempty count for block #%u is %" PRIu64 "."),
TRC_ARG, k, count));
}
return total;
}
/*
* Callback function used by hashlib_sort_entries().
*
* Compare keys in 'a' and 'b' where the length of the keys is
* given by 'v_length'.
*/
static int
hashlib_memcmp_keys(
const void *a,
const void *b,
void *v_length)
{
return memcmp(a, b, *(size_t*)v_length);
}
/*
* move the entries in each block to the front of the block, in
* preparation for sorting the entries
*/
static void
hashlib_make_contiguous(
sk_hashtable_t *table)
{
HashBlock *block;
uint64_t i, j;
unsigned int k;
uint8_t *entry_i;
uint8_t *entry_j;
const uint32_t entry_len = HASH_GET_ENTRY_LEN(table);
const uint32_t value_len = HASH_GET_VALUE_LEN(table);
TRACEMSG(1,(TRC_FMT "Making the HashTable contiguous...", TRC_ARG));
for (k = 0; k < table->num_blocks; ++k) {
TRACEMSG(2,(TRC_FMT "Making block #%u contiguous", TRC_ARG, k));
block = table->blocks[k];
if (0 == block->num_entries) {
continue;
}
/* 'j' starts at the front of the block and moves forward to
* find empty entries. 'i' starts at the end of the block
* and moves backward to find occupied entries. We move
* non-empty entries from 'i' to 'j' to get rid of holes in
* the block. Stop once i and j meet. */
for (j = 0, entry_j = HASH_ENTRY_AT(block, 0);
j < block->max_entries;
++j, entry_j += entry_len)
{
if (HASHENTRY_ISEMPTY(block, entry_j)) {
break;
}
}
for (i = block->max_entries-1, entry_i =HASH_ENTRY_AT(block,i);
i > j;
--i, entry_i -= entry_len)
{
if (!HASHENTRY_ISEMPTY(block, entry_i)) {
memcpy(entry_j, entry_i, entry_len);
/* set i to the empty value */
memcpy(HASHENTRY_GET_VALUE(block, entry_i),
table->no_value, value_len);
/* find next empty value */
for (++j, entry_j += entry_len;
j < i;
++j, entry_j += entry_len)
{
if (HASHENTRY_ISEMPTY(block, entry_j)) {
break;
}
}
}
}
assert(j <= block->num_entries);
}
TRACEMSG(1,(TRC_FMT "Made the HashTable contiguous.", TRC_ARG));
}
int
hashlib_sort_entries_usercmp(
sk_hashtable_t *table,
hashlib_sort_key_cmp_fn cmp_fn,
void *cmp_userdata)
{
HashBlock *block;
const size_t entry_len = HASH_GET_ENTRY_LEN(table);
unsigned int k;
TRACEMSG(1,(TRC_FMT "Sorting the HashTable...", TRC_ARG));
if (NULL == cmp_fn) {
return ERR_BADARGUMENT;
}
if (!table->is_sorted) {
/* first call; make the data in each block contiguous */
hashlib_make_contiguous(table);
}
table->cmp_fn = cmp_fn;
table->cmp_userdata = cmp_userdata;
/* we use qsort to sort each block individually; when iterating,
* return the lowest value among all sorted blocks. */
for (k = 0; k < table->num_blocks; ++k) {
TRACEMSG(2,(TRC_FMT "Sorting block #%u...", TRC_ARG, k));
block = table->blocks[k];
/* sort the block's entries */
skQSort_r(block->data, block->num_entries,
entry_len, table->cmp_fn, table->cmp_userdata);
}
TRACEMSG(1,(TRC_FMT "Sorted the HashTable.", TRC_ARG));
table->is_sorted = 1;
return 0;
}
int
hashlib_sort_entries(
sk_hashtable_t *table)
{
/* pass the key length in the context pointer */
table->keylen_cmp_userdata = HASH_GET_KEY_LEN(table);
return hashlib_sort_entries_usercmp(table, &hashlib_memcmp_keys,
&table->keylen_cmp_userdata);
}
/*
* ********************************************************************
* DEBUGGING FUNCTIONS FOR PRINTING INFO ABOUT A TABLE
* ********************************************************************
*/
static void
hashlib_dump_bytes(
FILE *fp,
const uint8_t *data,
uint64_t data_len)
{
uint64_t j;
for (j = 0; j < data_len; ++j) {
fprintf(fp, "%02x ", data[j]);
}
}
static void
hashlib_dump_block_header(
FILE *fp,
const HashBlock *block)
{
/* Dump header info */
fprintf(fp, ("Block size: \t %" PRIu64 "\n"), block->max_entries);
fprintf(fp, ("Num entries:\t %" PRIu64 " (%2.0f%% full)\n"),
block->num_entries,
100 * (float) block->num_entries / block->max_entries);
fprintf(fp, "Key width:\t %u bytes\n", block->table->key_len);
fprintf(fp, "Value width:\t %u bytes\n", block->table->value_len);
fprintf(fp, "Load factor:\t %u = %2.0f%%\n",
block->table->load_factor,
100 * (float) block->table->load_factor / 255);
fprintf(fp, "Empty value representation: ");
hashlib_dump_bytes(fp, block->table->no_value,
block->table->value_len);
fprintf(fp, "\n");
}
static void
hashlib_dump_block(
FILE *fp,
const HashBlock *block)
{
uint64_t i; /* Index of into hash table */
uint64_t entry_index = 0;
const uint8_t *entry;
hashlib_dump_block_header(fp, block);
fprintf(fp, "Data Dump:\n");
fprintf(fp, "----------\n");
for (i = 0; i < block->max_entries; ++i) {
entry = HASH_ENTRY_AT(block, i);
/* Don't dump empty entries */
if (HASHENTRY_ISEMPTY(block, entry)) {
continue;
}
++entry_index;
/* Dump hash index in table, the key and the value */
fprintf(fp, ("%6" PRIu64 " (%" PRIu64 "). "), entry_index, i);
hashlib_dump_bytes(fp, HASHENTRY_GET_KEY(block, entry),
HASH_GET_KEY_LEN(block));
fprintf(fp, " -- ");
hashlib_dump_bytes(fp, HASHENTRY_GET_VALUE(block, entry),
HASH_GET_VALUE_LEN(block));
fprintf(fp, "\n");
}
}
void
hashlib_dump_table(
FILE *fp,
const HashTable *table)
{
unsigned int k;
hashlib_dump_table_header(fp, table);
for (k = 0; k < table->num_blocks; ++k) {
fprintf(fp, "Block #%u:\n", k);
hashlib_dump_block(fp, table->blocks[k]);
}
}
void
hashlib_dump_table_header(
FILE *fp,
const HashTable *table)
{
unsigned int k;
HashBlock *block;
uint64_t total_used_memory = 0;
uint64_t total_data_memory = 0;
/* Dump header info */
fprintf(fp, "Key width:\t %u bytes\n", table->key_len);
fprintf(fp, "Value width:\t %d bytes\n", table->value_len);
fprintf(fp, "Empty value:\t");
hashlib_dump_bytes(fp, table->no_value, table->value_len);
fprintf(fp, "\n");
fprintf(fp, "Load factor:\t %d = %2.0f%%\n",
table->load_factor,
100 * (float) table->load_factor / 255);
fprintf(fp, ("Table has %" PRIu8 " blocks:\n"), table->num_blocks);
for (k = 0; k < table->num_blocks; ++k) {
block = table->blocks[k];
total_data_memory +=
HASH_GET_ENTRY_LEN(block) * block->max_entries;
total_used_memory +=
HASH_GET_ENTRY_LEN(block) * block->num_entries;
fprintf(fp, (" Block #%u: %" PRIu64 "/%" PRIu64 " (%3.1f%%)\n"),
k, block->num_entries, block->max_entries,
100 * ((float)block->num_entries) / block->max_entries);
}
fprintf(fp, ("Total data memory: %" PRIu64 " bytes\n"),
total_data_memory);
fprintf(fp, ("Total allocated data memory: %" PRIu64 " bytes\n"),
total_used_memory);
fprintf(fp, ("Excess data memory: %" PRIu64 " bytes\n"),
total_data_memory - total_used_memory);
fprintf(fp, "\n");
}
/*
** Local Variables:
** mode:c
** indent-tabs-mode:nil
** c-basic-offset:4
** End:
*/
--=-=-=
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Emacs : GNU Emacs 26.0.50.1 (x86_64-pc-linux-gnu, X toolkit)
of 2016-11-24
Package: CC Mode 5.33 (C/l)
Buffer Style: gnu
c-emacs-features: (pps-extended-state col-0-paren posix-char-classes gen-st=
ring-delim gen-comment-delim syntax-properties 1-bit)
current state:
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D
(setq
c-basic-offset 4
c-comment-only-line-offset '(0 . 0)
c-indent-comment-alist '((anchored-comment column . 0) (end-block space . =
1)
(cpp-end-block space . 2))
c-indent-comments-syntactically-p nil
c-block-comment-prefix ""
c-comment-prefix-regexp '((pike-mode . "//+!?\\|\\**") (awk-mode . "#+")
(other . "//+\\|\\**"))
c-doc-comment-style '((java-mode . javadoc) (pike-mode . autodoc)
(c-mode . gtkdoc))
c-cleanup-list '(scope-operator)
c-hanging-braces-alist '((substatement-open before after)
(arglist-cont-nonempty))
c-hanging-colons-alist nil
c-hanging-semi&comma-criteria '(c-semi&comma-inside-parenlist)
c-backslash-column 48
c-backslash-max-column 72
c-special-indent-hook '(c-gnu-impose-minimum)
c-label-minimum-indentation 1
c-offsets-alist '((inexpr-class . +)
(inexpr-statement . +)
(lambda-intro-cont . +)
(inlambda . c-lineup-inexpr-block)
(template-args-cont c-lineup-template-args +)
(incomposition . +)
(inmodule . +)
(innamespace . +)
(inextern-lang . +)
(composition-close . 0)
(module-close . 0)
(namespace-close . 0)
(extern-lang-close . 0)
(composition-open . 0)
(module-open . 0)
(namespace-open . 0)
(extern-lang-open . 0)
(objc-method-call-cont
c-lineup-ObjC-method-call-colons
c-lineup-ObjC-method-call
+
)
(objc-method-args-cont . c-lineup-ObjC-method-args)
(objc-method-intro . [0])
(friend . 0)
(cpp-define-intro c-lineup-cpp-define +)
(cpp-macro-cont . +)
(cpp-macro . [0])
(inclass . +)
(stream-op . c-lineup-streamop)
(arglist-cont-nonempty
c-lineup-gcc-asm-reg
c-lineup-arglist
)
(arglist-cont c-lineup-gcc-asm-reg 0)
(comment-intro
c-lineup-knr-region-comment
c-lineup-comment
)
(catch-clause . 0)
(else-clause . 0)
(do-while-closure . 0)
(access-label . -)
(case-label . 0)
(substatement . +)
(statement-case-intro . +)
(statement . 0)
(brace-entry-open . 0)
(brace-list-entry . 0)
(brace-list-intro . +)
(brace-list-close . 0)
(block-close . 0)
(block-open . 0)
(inher-cont . c-lineup-multi-inher)
(inher-intro . +)
(member-init-cont . c-lineup-multi-inher)
(member-init-intro . +)
(annotation-var-cont . +)
(annotation-top-cont . 0)
(topmost-intro . 0)
(knr-argdecl . 0)
(func-decl-cont . +)
(inline-close . 0)
(class-close . 0)
(class-open . 0)
(defun-block-intro . +)
(defun-close . 0)
(defun-open . 0)
(c . c-lineup-C-comments)
(string . c-lineup-dont-change)
(topmost-intro-cont
first
c-lineup-topmost-intro-cont
c-lineup-gnu-DEFUN-intro-cont
)
(brace-list-open . +)
(inline-open . 0)
(arglist-close . c-lineup-arglist)
(arglist-intro . c-lineup-arglist-intro-after-paren)
(statement-cont . +)
(statement-case-open . +)
(label . 0)
(substatement-label . 0)
(substatement-open . +)
(knr-argdecl-intro . 5)
(statement-block-intro . +)
)
c-buffer-is-cc-mode 'c-mode
c-tab-always-indent t
c-syntactic-indentation t
c-syntactic-indentation-in-macros t
c-ignore-auto-fill '(string cpp code)
c-auto-align-backslashes t
c-backspace-function 'backward-delete-char-untabify
c-delete-function 'delete-char
c-electric-pound-behavior nil
c-default-style '((java-mode . "java") (awk-mode . "awk") (other . "gnu"))
c-enable-xemacs-performance-kludge-p nil
c-old-style-variable-behavior nil
defun-prompt-regexp nil
tab-width 8
comment-column 32
parse-sexp-ignore-comments t
parse-sexp-lookup-properties t
auto-fill-function nil
comment-multi-line t
comment-start-skip "\\(//+\\|/\\*+\\)\\s *"
fill-prefix nil
fill-column 70
paragraph-start "[ ]*\\(//+\\|\\**\\)[ ]*$\\|^\f"
adaptive-fill-mode t
adaptive-fill-regexp "[ ]*\\(//+\\|\\**\\)[ ]*\\([ ]*\\([-=E2=80=93!|#%=
;>*=C2=B7=E2=80=A2=E2=80=A3=E2=81=83=E2=97=A6]+[ ]*\\)*\\)"
)
--=20
Michael Welsh Duggan
(md5i@HIDDEN)
--=-=-=--
Michael Welsh Duggan <mwd@HIDDEN>:bug-cc-mode@HIDDEN.
Full text available.bug-cc-mode@HIDDEN:bug#25021; Package cc-mode.
Full text available.
GNU bug tracking system
Copyright (C) 1999 Darren O. Benham,
1997 nCipher Corporation Ltd,
1994-97 Ian Jackson.