
Character arrays in C


I'm new to C. Just have a question about character arrays (or strings) in C: when I want to create a character array in C, do I have to give the size at the same time?

Because we may not know the size that we actually need. For example, in a client-server program, if we want to declare a character array for the server program to receive a message from the client program, but we don't know the size of the message, we could do it like this:

char buffer[1000];
recv(fd, buffer, 1000, 0);

But what if the actual message is only 10 bytes long? Will that cause a lot of wasted memory?


Yes, you have to decide the size in advance, even if you use malloc.

When you read from sockets, as in your example, you usually use a buffer of reasonable size and dispatch the data into other structures as soon as you consume it. In any case, 1000 bytes is not that much wasted memory, and it is certainly faster than asking the memory manager for one byte at a time :)
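
A rough sketch of that pattern, assuming a connected socket fd and a hypothetical process_chunk() consumer (neither is spelled out in the original answer), could look like this:

#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

void process_chunk(const char *data, size_t len);   /* assumed consumer */

void drain_socket(int fd)
{
    char buffer[1000];      /* one modest buffer, reused for every recv() */
    ssize_t n;

    while ((n = recv(fd, buffer, sizeof buffer, 0)) > 0) {
        process_chunk(buffer, (size_t)n);   /* copy/parse before the next recv() */
    }
    /* n == 0: peer closed the connection; n < 0: check errno */
}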


Yes, you have to give the size if you are not initializing the char array at the time of declaration. A better approach for your problem is to determine the optimum buffer size at run time and allocate the memory dynamically.


What you're asking about is how to dynamically size a buffer. This is done with dynamic allocation, such as malloc() -- a memory allocator. Using it gives you an important responsibility, though: when you're done with the buffer you must return it to the system yourself. If you used malloc() [or calloc()], you return it with free().

For example:

#include <stdlib.h>    // for malloc() and free()

size_t size = 1000;    // whatever size you decide you need at run time
char *buffer;          // pointer to a buffer -- essentially an unsized array
buffer = malloc(size);
// use the buffer ...
free(buffer);          // return the buffer -- do NOT use it any more!

The only problem left to solve is how to determine the size you'll need. If you're recv()'ing data that hints at its own size, you'll need to break the communication into two recv() calls: first get the fixed-size part that all packets carry, then allocate the full buffer, then recv() the rest.
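
As an illustration of that two-step idea, here is a sketch that assumes a protocol where every message is preceded by a 4-byte network-order length field (the header format is an assumption, not something stated in the question):

#include <arpa/inet.h>    /* ntohl */
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>

char *recv_message(int fd, size_t *out_len)
{
    uint32_t netlen;

    /* First recv(): the fixed-size part every packet carries */
    if (recv(fd, &netlen, sizeof netlen, MSG_WAITALL) != sizeof netlen)
        return NULL;                      /* short read or error */

    size_t len = ntohl(netlen);
    char *buf = malloc(len + 1);          /* sized exactly for this message */
    if (buf == NULL)
        return NULL;

    /* Second recv(): the rest of the message */
    if (recv(fd, buf, len, MSG_WAITALL) != (ssize_t)len) {
        free(buf);
        return NULL;
    }
    buf[len] = '\0';
    *out_len = len;
    return buf;                           /* caller must free() it */
}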


When you don't know the exact amount of input data, do as follows (a sketch in C follows the list):

  1. Create a small buffer.
  2. Allocate some memory for a "storage" (e.g. twice the buffer size).
  3. Fill the buffer with data from the input stream (e.g. socket, file, etc.).
  4. Copy the data from the buffer to the storage.

    4.1 If there is not enough room in the storage, reallocate the memory (e.g. to twice its current size).

  5. Repeat steps 3 and 4 until the end of the stream.

    Your storage now contains the data.
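
A minimal sketch of those steps, assuming a POSIX file descriptor fd as the input stream (function and variable names are made up for illustration):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

char *read_all(int fd, size_t *out_len)
{
    char buffer[1000];                    /* step 1: small read buffer        */
    size_t cap = 2000, len = 0;           /* step 2: storage twice the buffer */
    char *storage = malloc(cap);
    if (storage == NULL)
        return NULL;

    ssize_t n;
    while ((n = read(fd, buffer, sizeof buffer)) > 0) {   /* step 3 */
        if (len + (size_t)n > cap) {      /* step 4.1: double the storage     */
            cap *= 2;
            char *tmp = realloc(storage, cap);
            if (tmp == NULL) { free(storage); return NULL; }
            storage = tmp;
        }
        memcpy(storage + len, buffer, (size_t)n);   /* step 4: append chunk   */
        len += (size_t)n;
    }
    if (n < 0) { free(storage); return NULL; }      /* read error             */

    *out_len = len;
    return storage;                       /* step 5 done: caller frees this   */
}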


If you don't know the size a priori, then you have no choice but to create the buffer dynamically using malloc (or whatever equivalent mechanism your language of choice provides).

size_t buffer_size = ...; /* read from a DEFINE or from a config file */
char * buffer = malloc( sizeof( char ) * (buffer_size + 1) );

Creating a buffer of size m, but only receiving an input string of size n with n < m is not a waste of memory, but an engineering compromise.

If you create your buffer with a size close to the intended input, you risk having to refill the buffer many, many times in those cases where the input turns out much larger than the buffer (n >> m). Typically, iterations over the buffer are tied up with I/O operations, so you might be saving a few bytes (which is really nothing on today's hardware) at the expense of potentially creating problems at some other end. Especially for client-server apps. If we were talking about resource-constrained embedded systems, that'd be another thing.

You should be worrying about getting your algorithms right and solid. Then you worry, if you can, about shaving off a few bytes here and there.

For me, I'd rather create a buffer that is 2 to 10 times larger than the average input (not the smallest input as in your case, but the average), assuming my inputs have a small standard deviation in size. Otherwise, I'd go 20 times the size or more (especially if memory is cheap and doing this minimizes hitting the disk or the NIC).

In the most basic setup, one typically gets the buffer size as a configuration item read from a file (or passed as an argument), falling back to a compile-time default if none is provided. Then you can adjust the size of your buffers according to the observed input sizes.
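
As a sketch of that setup (the DEFAULT_BUFSIZE name and the use of a command-line argument are assumptions made for illustration):

#include <stdio.h>
#include <stdlib.h>

#ifndef DEFAULT_BUFSIZE
#define DEFAULT_BUFSIZE 4096              /* compile-time default */
#endif

int main(int argc, char *argv[])
{
    size_t bufsize = DEFAULT_BUFSIZE;

    /* Override the default with the first command-line argument, if given */
    if (argc > 1) {
        long requested = strtol(argv[1], NULL, 10);
        if (requested > 0)
            bufsize = (size_t)requested;
    }

    char *buffer = malloc(bufsize);
    if (buffer == NULL)
        return 1;

    printf("Using a %zu-byte buffer\n", bufsize);
    /* ... use the buffer ... */
    free(buffer);
    return 0;
}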

More elaborate algorithms (say TCP) adjust the size of their buffers at run-time to better accommodate input whose size might/will change over time.


Even if you use malloc, you still have to give a size first! So instead you declare an array large enough to hold the message, like:

char buffer[2000];

If the message turns out to be smaller or larger, you can shrink or grow a dynamically allocated buffer with realloc() to release the unused space or to make room for more (realloc() works only on memory obtained from malloc()/calloc()/realloc(), not on a fixed-size array).

Example:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *str;

    /* Initial memory allocation */
    str = malloc(15);
    strcpy(str, "tutorialspoint");
    printf("String = %s,  Address = %p\n", str, (void *)str);

    /* Reallocating memory */
    str = realloc(str, 25);
    strcat(str, ".com");
    printf("String = %s,  Address = %p\n", str, (void *)str);

    free(str);

    return 0;
}

Note: stdlib.h is needed for malloc()/realloc()/free(), string.h for strcpy()/strcat(), and stdio.h for printf().

