[RESOLVED] Error during meshing

Here's what I get. End of processing:

########
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  v1606+                                |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : v1606+
Exec   : reconstructPar -latestTime
Date   : Jun 22 2017
Time   : 02:33:55
Host   : "69747be895ce"
PID    : 1140
Case   : /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Allowing user-supplied system call operations
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

--> FOAM FATAL ERROR:
No times selected

    From function int main(int, char**)
    in file reconstructPar.C at line 215.
########

POST:

########
MWFlow Post-Processing Utility
Case folder: 17MO000
Plotting of forces failed.
FSource folders: []
mSurf folders: []
ptot iso file to check:
Traceback (most recent call last):
  File "MFlow_PP_merged.py", line 2387, in
  File "MFlow_PP_merged.py", line 400, in HM
  File "MFlow_PP_merged.py", line 200, in HX
WindowsError: [Error 3] The system cannot find the path specified: '17MO000\\postProcessing\\surfaces/*.*'
Failed to execute script MFlow_PP_merged
########

PS: Can we have a bigger text box here?
Posted by Idadox
Asked on June 22, 2017 08:41
0

Files sent.
The reconstructPar run is mine; I did it afterwards, as mentioned in my post.
That file had warnings; it's the original 2016 version with the 100 mm HX. Total run time was 8 minutes, versus 5 hours on the 2016 software.
I'm now running it again with the 60 mm HX; no warnings.

Posted by Idadox
Answered On June 22, 2017 08:53
0

I'd built it from the command line. Trying again from the GUI.

Posted by Idadox
Answered On June 22, 2017 12:46
0

Kind of bad news: as you can see, the case runs in general. Why it fails on your Windows and OSX machines is unclear. We should wait and see whether others report similar problems.

Posted by MCAE Support
Answered On June 23, 2017 06:36
0

Your case crashed during meshing already. Maybe you can rerun it and check that you are not running out of memory. If it fails again, send me the whole case and I will check it.

Posted by MCAE Support
Answered On June 22, 2017 08:55
0

Make sure not to overwrite your log files.

Posted by MCAE Support
Answered On June 22, 2017 12:47
0

The Windows machine had a power outage; no software problem.

The OSX machine finished after some memory tweaking in VirtualBox; it seems OK now.

The Linux AWS VM got it right the first time.

Posted by Idadox
Answered On June 23, 2017 09:34
0

It's running at 49.1% of RAM as per top. I'll let you know.

If need be, should I use the same Dropbox link?

Posted by Idadox
Answered On June 22, 2017 09:00
0

Did that already, I'm afraid.

I'll be more careful from now on.

Posted by Idadox
Answered On June 22, 2017 12:48
0

Yes, same link but please zip it all up into one file this time.

Posted by MCAE Support
Answered On June 22, 2017 09:04
0

Can you send them back to me?

Posted by Idadox
Answered On June 22, 2017 12:48
0

Meshing ran for a full hour this time, then seems to have crashed again. I'm submitting the geometry and logs via Dropbox.

Posted by Idadox
Answered On June 22, 2017 10:02
0

It failed again; I'll try running it in Windows.

Posted by Idadox
Answered On June 22, 2017 14:44
0

OK, I have received the files and checked them.
The first thing I see is that your STL files are not very clean. Have a look at the trailing edge of the rear wing end-plates; this can be an issue for the mesher.
I will run the case and try to fix it, and will let you know as soon as I have results.

Posted by MCAE Support
Answered On June 22, 2017 10:12
0

Please zip up the log files of the newly crashed case and send them to me.

Posted by MCAE Support
Answered On June 22, 2017 16:32
0

About that mesh: I have a nicer wing geometry, but it gives noticeably lower CL on the 2016 version; that's why this one is being used.

Posted by Idadox
Answered On June 22, 2017 10:16
0

There was a damn power outage here, and the Windows run aborted before failing.

I did upload the second log set; that run was done from the GUI and the process lasted over an hour.

Posted by Idadox
Answered On June 22, 2017 20:02
0

Oh, you said the end-plates; I'll check.

Posted by Idadox
Answered On June 22, 2017 10:17
0

I just checked the log files. One time the mesher crashes after 9 layer iteration steps, the other time after 18. This tells me it crashes due to a machine issue; I think you are running out of memory. Sometimes the mesher is not very good and needs extra memory. But is it possible that you were using the computer for something else during meshing? Even a web browser can use crazy amounts of memory.
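To rule memory out, usage can be logged while the mesher runs. A minimal sketch for Linux, reading /proc/meminfo (the log file name and the snappyHexMesh process name in the usage comment are illustrative, not from this thread):

```shell
#!/bin/sh
# Append a timestamped snapshot of available memory to a log file.
# Run it in a loop alongside the mesher to see whether memory runs out.
log_mem() {
    # $1: log file path (illustrative name, e.g. mem.log)
    printf '%s %s MiB available\n' "$(date '+%F %T')" \
        "$(awk '/^MemAvailable:/ {printf "%d", $2/1024}' /proc/meminfo)" >> "$1"
}

# Example: one snapshot every 10 seconds while the mesher runs, e.g.:
#   while pgrep -x snappyHexMesh > /dev/null; do log_mem mem.log; sleep 10; done
```

`top` in batch mode (`top -b -d 10`) piped to a file is an alternative if you also want per-process figures.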

Posted by MCAE Support
Answered On June 22, 2017 20:07
0

A bit more, from early in the processing:

--> FOAM Warning :
From function Foam::label Foam::sampledSurfaces::classifyFields()
in file sampledSurface/sampledSurfaces/sampledSurfacesGrouping.C at line 53
Cannot find field file matching static(p)_coeff
--> FOAM Warning :
From function Foam::label Foam::sampledSurfaces::classifyFields()
in file sampledSurface/sampledSurfaces/sampledSurfacesGrouping.C at line 53
Cannot find field file matching total(p)_coeff
[0] #0 Foam::error::printStack(Foam::Ostream&)[2] #0 Foam::error::printStack(Foam::Ostream&)[4] #0 Foam::error::printStack(Foam::Ostream&)[5] #0 Foam::error::printStack(Foam::Ostream&)[3] #0 Foam::error::printStack(Foam::Ostream&)[1] #0 Foam::error::printStack(Foam::Ostream&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
[4] #1 Foam::sigFpe::sigHandler(int) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
[3] #1 Foam::sigFpe::sigHandler(int) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
[2] #1 Foam::sigFpe::sigHandler(int) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
[0] #1 Foam::sigFpe::sigHandler(int) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
[5] #1 Foam::sigFpe::sigHandler(int) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
[1] #1 Foam::sigFpe::sigHandler(int) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"

Posted by Idadox
Answered On June 22, 2017 08:44
0

The end-plates were only one example; there are a few areas that do not look very nice (the STL tessellation, that is; the car itself is super cool).

I meshed your case and am now running it without any problem. So what is different? Are you using OF v1606+?

Posted by MCAE Support
Answered On June 22, 2017 11:42
0

No, it's just sitting there. I'll restart everything and test. Do you know a command I could use to log memory usage in Linux?

Posted by Idadox
Answered On June 22, 2017 20:48
0

This does not look good. Anyway, why is there a reconstructPar command? That is not from MantiumFlow.
Please send all log files from this case to this link:
https://www.dropbox.com/request/9i8A2JRILVtQcibOITjs
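As background, the "No times selected" error from reconstructPar usually means the processor directories contain no result time folders to reconstruct, e.g. because the run crashed before writing any. A quick check, sketched as a small shell helper (the function name and the case path in the usage comment are illustrative):

```shell
#!/bin/sh
# Count numeric time directories written by processor 0 of a decomposed case.
# reconstructPar typically reports "No times selected" when none match.
count_times() {
    # $1: case directory (illustrative path)
    ls "$1/processor0" 2>/dev/null | grep -Ec '^[0-9]+(\.[0-9]+)?$'
}

# Usage (path is illustrative):
#   count_times /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000
```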

Posted by MCAE Support
Answered On June 22, 2017 08:44
0

Yep, v1606+ on OSX; see my logs below. What can I try?

Posted by Idadox
Answered On June 22, 2017 12:07
0

I have another run going on Windows and one on a Linux VM at AWS.

Posted by Idadox
Answered On June 22, 2017 20:49
0

Solver.log:

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  v1606+                                |
|   \\  /    A nd           | Web:      http://www.OpenFOAM.com               |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build : v1606+
Exec : simpleFoam -case /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000 -parallel
Date : Jun 22 2017
Time : 02:31:25
Host : "69747be895ce"
PID : 984
Case : /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000
nProcs : 6
Slaves :
5
(
"69747be895ce.985"
"69747be895ce.986"
"69747be895ce.987"
"69747be895ce.988"
"69747be895ce.989"
)

Pstream initialized with:
floatTransfer : 0
nProcsSimpleSum : 0
commsType : nonBlocking
polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

SIMPLE: convergence criteria
field p tolerance 5e-05
field ”(U|k|epsilon|omega)” tolerance 1e-06

Reading field p

[0]
[0]
[0] --> FOAM FATAL IO ERROR:
[0] Cannot find patchField entry for none_to_HX_R_000
[0]
[0] file: /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000/processor0/0/p.boundaryField from line 18 to line 12.
[0]
[0] From function void Foam::GeometricField::GeometricBoundaryField::readField(const Foam::DimensionedField&, const Foam::dictionary&) [with Type = double; PatchField = Foam::fvPatchField; GeoMesh = Foam::volMesh]
[0] in file /home/buzz2/pawan/OpenFOAM/OpenFOAM-v1606+/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 191.
[0]
FOAM parallel run exiting
[0]
[2]
[2]
[2] --> FOAM FATAL IO ERROR:
[2] Cannot find patchField entry for none_to_HX_R_000
[2]
[2] file: /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000/processor2/0/p.boundaryField from line 18 to line 12.
[2]
[2] From function void Foam::GeometricField::GeometricBoundaryField::readField(const Foam::DimensionedField&, const Foam::dictionary&) [with Type = double; PatchField = Foam::fvPatchField; GeoMesh = Foam::volMesh]
[2] in file /home/buzz2/pawan/OpenFOAM/OpenFOAM-v1606+/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 191.
[2]
FOAM parallel run exiting
[2]
[3]
[3]
[3] --> FOAM FATAL IO ERROR:
[3] Cannot find patchField entry for none_to_HX_R_000
[3]
[3] file: /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000/processor3/0/p.boundaryField from line 18 to line 12.
[3]
[3] From function void Foam::GeometricField::GeometricBoundaryField::readField(const Foam::DimensionedField&, const Foam::dictionary&) [with Type = double; PatchField = Foam::fvPatchField; GeoMesh = Foam::volMesh]
[3] in file /home/buzz2/pawan/OpenFOAM/OpenFOAM-v1606+/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 191.
[3]
FOAM parallel run exiting
[3]
[1]
[1]
[1] --> FOAM FATAL IO ERROR:
[1] Cannot find patchField entry for none_to_HX_R_000
[1]
[1] file: /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000/processor1/0/p.boundaryField from line 18 to line 12.
[1]
[1] From function void Foam::GeometricField::GeometricBoundaryField::readField(const Foam::DimensionedField&, const Foam::dictionary&) [with Type = double; PatchField = Foam::fvPatchField; GeoMesh = Foam::volMesh]
[1] in file /home/buzz2/pawan/OpenFOAM/OpenFOAM-v1606+/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 191.
[1]
FOAM parallel run exiting
[1]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[4]
[4]
[4] --> FOAM FATAL IO ERROR:
[4] Cannot find patchField entry for none_to_HX_R_000
[4]
[4] file: /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000/processor4/0/p.boundaryField from line 18 to line 12.
[4]
[4] From function void Foam::GeometricField::GeometricBoundaryField::readField(const Foam::DimensionedField&, const Foam::dictionary&) [with Type = double; PatchField = Foam::fvPatchField; GeoMesh = Foam::volMesh]
[4] in file [5]
/home/buzz2/pawan/OpenFOAM/OpenFOAM-v1606+/src/OpenFOAM/lnInclude/GeometricBoundaryField.C[5]
[5] --> FOAM FATAL IO ERROR:
[5] Cannot find patchField entry for none_to_HX_R_000
[5]
[5] file: /Users/ricardo/OpenFOAM/docker-v1606+/run/17MO000/processor5/0/p.boundaryField from line 18 to line 12.
[5]
[5] From function void Foam::GeometricField::GeometricBoundaryField::readField(const Foam::DimensionedField&, const Foam::dictionary&) [with Type = double; PatchField = Foam::fvPatchField; GeoMesh = Foam::volMesh]
[5] in file /home/buzz2/pawan/OpenFOAM/OpenFOAM-v1606+/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 191.
[5]
FOAM parallel run exiting
[5]
at line 191.
[4]
FOAM parallel run exiting
[4]
[69747be895ce:00981] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[69747be895ce:00981] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Posted by Idadox
Answered On June 22, 2017 08:45
0

As the case should actually work, you will have to check a few things. Since it fails during meshing, check whether the log looks exactly the same every time it fails. Does a different case work? You could also try changing the number of CPU cores you use; check system/decomposeParDict to see if MantiumFlow really changed it. Prime numbers, for example, are not accepted.
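For reference, the core count lives in system/decomposeParDict; a minimal sketch of the relevant entries (the values shown are illustrative, not this case's settings):

```
// system/decomposeParDict -- minimal sketch, values illustrative
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  6;       // pick a non-prime core count such as 4, 6, or 8
method              scotch;  // scotch needs no manual decomposition directions
```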

Posted by MCAE Support
Answered On June 22, 2017 12:40
0

The AWS case is past meshing and running. Still waiting on OSX and Windows.

Posted by Idadox
Answered On June 22, 2017 21:42