* #589 - Detect broken RDRAND during initialization; also, fix segfault
in the CPUID check.
-* #592 - Fix integer overflows to prevert out of bounds write on large input.
+* #592 - Fix integer overflows to prevent out of bounds write on large input.
-* Protect against division by zero in linkhash, when creaed with zero size.
+* Protect against division by zero in linkhash, when created with zero size.
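A zero-sized table makes the usual `hash % size` bucket computation divide by zero. A minimal sketch of that kind of guard (illustrative names, not json-c's actual linkhash internals):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical guard: clamp a requested table size of 0 up to 1 so that
 * later "hash % size" bucket arithmetic can never divide by zero.
 * Names here are illustrative, not json-c's real internals. */
static size_t clamp_table_size(size_t requested)
{
	return requested == 0 ? 1 : requested;
}

static size_t bucket_index(unsigned long hash, size_t table_size)
{
	return (size_t)(hash % clamp_table_size(table_size));
}
```

With the clamp in place, a table "created with zero size" degenerates to a single bucket instead of crashing.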
* #602 - Fix json_parse_uint64() internal error checking, leaving the retval
untouched in more failure cases.
* #614 - Prevent truncation when custom double formatters insert extra \0's
-* Use size_t for array length and size. Platforms where sizeof(size_t) != sizeof(int) may not be backwards compatible
+* Use size_t for array length and size. Platforms where sizeof(size_t) != sizeof(int) may not be backwards compatible.
See commits 45c56b, 92e9a5 and others.
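The overflow class fixed in #592 typically arises from computing `count * elem_size` in a type that silently wraps, producing an undersized buffer and a subsequent out-of-bounds write. A hedged sketch of the pre-multiplication check involved (illustrative, not the exact json-c code from those commits):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative overflow check: refuse to compute count * elem_size when
 * the product would wrap around SIZE_MAX. Returns 0 on success, -1 if
 * the multiplication would overflow. */
static int checked_mul_size(size_t count, size_t elem_size, size_t *out)
{
	if (elem_size != 0 && count > SIZE_MAX / elem_size)
		return -1; /* would overflow */
	*out = count * elem_size;
	return 0;
}
```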
-* Check for failue when allocating memory, returning NULL and errno=ENOMEM.
+* Check for failure when allocating memory, returning NULL and errno=ENOMEM.
See commit 2149a04.
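The pattern this entry describes can be sketched as follows (hypothetical helper name; not the actual json-c code from commit 2149a04):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the failure-checking pattern: on a failed allocation, set
 * errno to ENOMEM and return NULL to the caller rather than continuing
 * with a bad pointer. The helper name is illustrative. */
static char *dup_string_or_enomem(const char *s)
{
	size_t len = strlen(s) + 1;
	char *copy = malloc(len);
	if (copy == NULL)
	{
		errno = ENOMEM;
		return NULL;
	}
	memcpy(copy, s, len);
	return copy;
}
```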
-* Change json_object_object_add() return type from void to int, and will return -1 on failures, instead of exiting. (Note: this is not an ABI change)
+* Change json_object_object_add() return type from void to int; it now returns -1 on failure instead of exiting. (Note: this is not an ABI change)
* Add an alternative iterator implementation, see json_object_iterator.h
* Make json_object_iter public to enable external use of the
json_object_object_foreachC macro.
- * Add a printbuf_memset() function to provide an effecient way to set and
+ * Add a printbuf_memset() function to provide an efficient way to set and
append things like whitespace indentation.
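As a rough illustration of the idea (a simplified stand-in, not json-c's actual printbuf implementation):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for a print buffer, just enough to sketch the idea:
 * a memset-style call fills a region with one byte value (e.g. spaces
 * for indentation) instead of appending that byte in a loop.
 * Struct and function names here are illustrative. */
struct sketchbuf
{
	char *buf;
	int size; /* allocated bytes */
	int bpos; /* bytes in use */
};

/* offset == -1 means "start at the current end of the buffer" */
static int sketchbuf_memset(struct sketchbuf *pb, int offset, int charvalue, int len)
{
	if (offset == -1)
		offset = pb->bpos;
	if (offset + len + 1 > pb->size)
	{
		int newsize = (offset + len + 1) * 2;
		char *t = realloc(pb->buf, newsize);
		if (t == NULL)
			return -1;
		pb->buf = t;
		pb->size = newsize;
	}
	memset(pb->buf + offset, charvalue, len);
	if (offset + len > pb->bpos)
	{
		pb->bpos = offset + len;
		pb->buf[pb->bpos] = '\0';
	}
	return 0;
}
```

One bulk memset plus at most one resize replaces a per-character append loop, which is where the efficiency for long runs of indentation comes from.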
* Adjust json_object_is_type and json_object_get_type so they return
json_type_null for NULL objects and handle NULL passed to
0.7
===
* Add escaping of backslash to json output
- * Add escaping of foward slash on tokenizing and output
+ * Add escaping of forward slash on tokenizing and output
* Changes to internal tokenizer from using recursion to
using a depth state structure to allow incremental parsing
* beyond the lifetime of the parent object.
* - Detaching an object field or array index from its parent object
* (using `json_object_object_del()` or `json_object_array_del_idx()`)
- * - Sharing a json_object with multiple (not necesarily parallel) threads
+ * - Sharing a json_object with multiple (not necessarily parallel) threads
* of execution that all expect to free it (with `json_object_put()`) when
* they're done.
*
*
* @param obj the json_object instance
* @param val the value to add
- * @returns 1 if the increment succeded, 0 otherwise
+ * @returns 1 if the increment succeeded, 0 otherwise
*/
JSON_EXPORT int json_object_int_inc(struct json_object *obj, int64_t val);
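The documentation above does not spell out how the increment handles overflow. As one hedged illustration of what such an increment must guard against: signed overflow is undefined behavior in C, so the bounds check has to happen before the addition; saturating at the int64_t limits is one plausible policy (this helper is illustrative, not json-c's actual implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper, not json-c code: add "val" to "cur" while
 * saturating at the int64_t limits instead of overflowing. The checks
 * run before the addition because signed overflow is undefined
 * behavior in C. */
static int64_t saturating_add64(int64_t cur, int64_t val)
{
	if (val > 0 && cur > INT64_MAX - val)
		return INT64_MAX;
	if (val < 0 && cur < INT64_MIN - val)
		return INT64_MIN;
	return cur + val;
}
```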
* when custom serializers are in use. See also
- * json_object set_serializer.
+ * json_object_set_serializer.
*
- * @returns 0 if the copy went well, -1 if an error occured during copy
+ * @returns 0 if the copy went well, -1 if an error occurred during copy
* or if the destination pointer is non-NULL
*/
* we can't simply peek ahead here, because the
* characters we need might not be passed to us
* until a subsequent call to json_tokener_parse.
- * Instead, transition throug a couple of states.
+ * Instead, transition through a couple of states.
* (now):
* _escape_unicode => _unicode_need_escape
* (see a '\\' char):
* rest of the string. Every machine with memory protection I've seen
* does it on word boundaries, so is OK with this. But VALGRIND will
* still catch it and complain. The masking trick does make the hash
- * noticably faster for short strings (like English words).
+ * noticeably faster for short strings (like English words).
* AddressSanitizer is similarly picky about overrunning
* the buffer. (http://clang.llvm.org/docs/AddressSanitizer.html
*/
}
/* clang-format on */
-/* a simple hash function similiar to what perl does for strings.
+/* a simple hash function similar to what perl does for strings.
- * for good results, the string should not be excessivly large.
+ * for good results, the string should not be excessively large.
*/
static unsigned long lh_perllike_str_hash(const void *k)
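The excerpt cuts off before the body. A sketch of the perl-style multiply-by-33 string hash the comment describes (details such as the initial value may differ from json-c's real lh_perllike_str_hash):

```c
#include <assert.h>

/* Sketch of a perl-style string hash: hashval = hashval * 33 + c for
 * each byte of the key. Short keys stay fast because the loop touches
 * exactly one byte per iteration with no tail-masking tricks. */
static unsigned long perllike_str_hash_sketch(const char *key)
{
	unsigned long hashval = 1;
	while (*key != '\0')
		hashval = hashval * 33 + (unsigned long)(unsigned char)*key++;
	return hashval;
}
```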
if (random_seed == -1)
{
RANDOM_SEED_TYPE seed;
- /* we can't use -1 as it is the unitialized sentinel */
+ /* we can't use -1 as it is the uninitialized sentinel */
while ((seed = json_c_get_random_seed()) == -1) {}
#if SIZEOF_INT == 8 && defined __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8
#define USE_SYNC_COMPARE_AND_SWAP 1
/**
* Calculate the hash of a key for a given table.
*
- * This is an exension to support functions that need to calculate
+ * This is an extension to support functions that need to calculate
* the hash several times and allows them to do it just once and then pass
* in the hash to all utility functions. Depending on use case, this can be a
* considerable performance improvement.