ValueError: setting an array element with a sequence Tensorflow and numpy


I'm trying to run the following code, which includes a function I wrote myself:



def next_batch(batch_size):
    label = [0, 1, 0, 0, 0]
    X = []
    Y = []
    for i in range(0, batch_size):
        rand = random.choice(os.listdir(mnist))
        rand = mnist + rand
        img = cv2.imread(str(rand), 0)
        img = np.array(img)
        img = img.ravel()
        X.append(img)
        Y.append(label)
    X = np.array(X)
    Y = np.array(Y)
    return X, Y
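For reference, here's a minimal NumPy-only sketch of how a batch like `X` can trigger that exact ValueError when the raveled images differ in length (nothing here is specific to my code beyond the shapes; `to_float_batch` is just an illustrative stand-in for what `feed_dict` does):

```python
import numpy as np

def to_float_batch(images):
    # Mimics what sess.run does with a feed_dict value:
    # convert the Python object to a float32 ndarray.
    return np.asarray(np.array(list(images)), dtype=np.float32)

# Same-size raveled images stack into a clean 2-D batch.
batch = to_float_batch([np.zeros(6), np.ones(6)])
print(batch.shape)  # (2, 6)

# Different-size raveled images cannot form a rectangular array;
# depending on the NumPy version this either builds a dtype=object
# array that then fails to convert, or raises immediately -- in both
# cases a ValueError, like the one in the traceback below.
try:
    to_float_batch([np.zeros(6), np.zeros(4)])
except ValueError as err:
    print("ValueError:", err)
```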


Then I want to use the X and Y arrays to train my network.
I run it with this code (mainly the bottom part of def train(train_model) is where it goes wrong):



def train(train_model=True):
    """
    Used to train the autoencoder by passing in the necessary inputs.
    :param train_model: True -> Train the model, False -> Load the latest trained model and show the image grid.
    :return: does not return anything
    """
    with tf.variable_scope(tf.get_variable_scope()):
        encoder_output = encoder(x_input)
        # Concat class label and the encoder output
        decoder_input = tf.concat([y_input, encoder_output], 1)
        decoder_output = decoder(decoder_input)

    with tf.variable_scope(tf.get_variable_scope()):
        d_real = discriminator(real_distribution)
        d_fake = discriminator(encoder_output, reuse=True)

    with tf.variable_scope(tf.get_variable_scope()):
        decoder_image = decoder(manual_decoder_input, reuse=True)

    # Autoencoder loss
    autoencoder_loss = tf.reduce_mean(tf.square(x_target - decoder_output))

    # Discriminator loss
    dc_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_real), logits=d_real))
    dc_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(d_fake), logits=d_fake))
    dc_loss = dc_loss_fake + dc_loss_real

    # Generator loss
    generator_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(d_fake), logits=d_fake))

    all_variables = tf.trainable_variables()
    dc_var = [var for var in all_variables if 'dc_' in var.name]
    en_var = [var for var in all_variables if 'e_' in var.name]

    # Optimizers
    autoencoder_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                   beta1=beta1).minimize(autoencoder_loss)
    discriminator_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                     beta1=beta1).minimize(dc_loss, var_list=dc_var)
    generator_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                                 beta1=beta1).minimize(generator_loss, var_list=en_var)

    init = tf.global_variables_initializer()

    # Reshape images to display them
    input_images = tf.reshape(x_input, [-1, 368, 432, 1])
    generated_images = tf.reshape(decoder_output, [-1, 368, 432, 1])

    # Tensorboard visualization
    tf.summary.scalar(name='Autoencoder Loss', tensor=autoencoder_loss)
    tf.summary.scalar(name='Discriminator Loss', tensor=dc_loss)
    tf.summary.scalar(name='Generator Loss', tensor=generator_loss)
    tf.summary.histogram(name='Encoder Distribution', values=encoder_output)
    tf.summary.histogram(name='Real Distribution', values=real_distribution)
    tf.summary.image(name='Input Images', tensor=input_images, max_outputs=10)
    tf.summary.image(name='Generated Images', tensor=generated_images, max_outputs=10)
    summary_op = tf.summary.merge_all()

    # Saving the model
    saver = tf.train.Saver()
    step = 0
    with tf.Session() as sess:
        if train_model:
            tensorboard_path, saved_model_path, log_path = form_results()
            sess.run(init)
            writer = tf.summary.FileWriter(logdir=tensorboard_path, graph=sess.graph)
            for i in range(n_epochs):
                # print(n_epochs)
                n_batches = int(10000 / batch_size)
                print("------------------Epoch {}/{}------------------".format(i, n_epochs))
                for b in range(1, n_batches + 1):
                    # print("In the loop")
                    z_real_dist = np.random.randn(batch_size, z_dim) * 5.
                    batch_x, batch_y = next_batch(batch_size)
                    # print("Created the batches")
                    sess.run(autoencoder_optimizer, feed_dict={x_input: batch_x, x_target: batch_x, y_input: batch_y})
                    print("batch_x", batch_x)
                    print("x_input:", x_input)
                    print("x_target:", x_target)
                    print("y_input:", y_input)
                    sess.run(discriminator_optimizer,
                             feed_dict={x_input: batch_x, x_target: batch_x, real_distribution: z_real_dist})
                    sess.run(generator_optimizer, feed_dict={x_input: batch_x, x_target: batch_x})
                    # print("setup the session")
                    if b % 50 == 0:
                        a_loss, d_loss, g_loss, summary = sess.run(
                            [autoencoder_loss, dc_loss, generator_loss, summary_op],
                            feed_dict={x_input: batch_x, x_target: batch_x,
                                       real_distribution: z_real_dist, y_input: batch_y})
                        writer.add_summary(summary, global_step=step)
                        print("Epoch: {}, iteration: {}".format(i, b))
                        print("Autoencoder Loss: {}".format(a_loss))
                        print("Discriminator Loss: {}".format(d_loss))
                        print("Generator Loss: {}".format(g_loss))
                        with open(log_path + '/log.txt', 'a') as log:
                            log.write("Epoch: {}, iteration: {}\n".format(i, b))
                            log.write("Autoencoder Loss: {}\n".format(a_loss))
                            log.write("Discriminator Loss: {}\n".format(d_loss))
                            log.write("Generator Loss: {}\n".format(g_loss))
                    step += 1

            saver.save(sess, save_path=saved_model_path, global_step=step)
        else:
            # Get the latest results folder
            all_results = os.listdir(results_path)
            all_results.sort()
            saver.restore(sess, save_path=tf.train.latest_checkpoint(results_path + '/' +
                                                                     all_results[-1] + '/Saved_models/'))
            generate_image_grid(sess, op=decoder_image)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Autoencoder Train Parameter")
    parser.add_argument('--train', '-t', type=bool, default=True,
                        help='Set to True to train a new model, False to load weights and display image grid')
    args = parser.parse_args()
    train(train_model=args.train)
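A side note on the launcher, separate from the crash: argparse's type=bool does not parse the string "False" as False, because bool("False") is True, so --train False would still train. A common workaround is a small string parser (sketch; str2bool is my own helper name, not part of the code above):

```python
import argparse

def str2bool(value):
    # bool("False") is True, so interpret the string explicitly instead.
    return str(value).lower() in ("yes", "true", "t", "1")

parser = argparse.ArgumentParser(description="Autoencoder Train Parameter")
parser.add_argument('--train', '-t', type=str2bool, default=True,
                    help='True to train a new model, False to load weights')
print(parser.parse_args(['--train', 'False']).train)  # False
```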


I'm getting this error message:




Traceback (most recent call last):
  File "/Users/frederikcalsius/Desktop/adv/supervised_adversarial_autoencoder.py", line 290, in <module>
    train(train_model=args.train)
  File "/Users/frederikcalsius/Desktop/adv/supervised_adversarial_autoencoder.py", line 249, in train
    sess.run(autoencoder_optimizer, feed_dict={x_input: batch_x, x_target: batch_x, y_input: batch_y})
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/tensorflow/python/client/session.py", line 1121, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File "/Users/frederikcalsius/Library/Python/3.7/lib/python/site-packages/numpy/core/numeric.py", line 538, in asarray
    return array(a, dtype, copy=False, order=order)



ValueError: setting an array element with a sequence.



Process finished with exit code 1




I really don't get this error. Can somebody help me out with this?
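In case it helps narrow things down: a small check that could be dropped into next_batch (validate_batch is a hypothetical helper, not part of my code) to surface unreadable files or mixed image sizes before the feed_dict conversion hides them behind this ValueError:

```python
import numpy as np

def validate_batch(images):
    """Raise a descriptive error if `images` cannot form one rectangular batch.

    Hypothetical helper: `images` stands in for the X list built in next_batch.
    """
    if any(img is None for img in images):
        # cv2.imread returns None for unreadable files (e.g. .DS_Store).
        raise ValueError("at least one image failed to load (imread returned None)")
    shapes = {img.shape for img in images}
    if len(shapes) != 1:
        raise ValueError("images have mixed shapes: {}".format(sorted(shapes)))
    return np.array(images)

# Uniform shapes pass through as a clean 2-D array.
ok = validate_batch([np.zeros(4), np.ones(4)])
print(ok.shape)  # (2, 4)
```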
















      python python-3.x numpy neural-network tensorflow





      asked 5 mins ago









      FreddyGump

      11







